What Graykey Can and Can’t Unlock

This is from 404 Media:

The Graykey, a phone unlocking and forensics tool that is used by law enforcement around the world, is only able to retrieve partial data from all modern iPhones that run iOS 18 or iOS 18.0.1, which are two recently released versions of Apple’s mobile operating system, according to documents describing the tool’s capabilities in granular detail obtained by 404 Media. The documents do not appear to contain information about what Graykey can access from the public release of iOS 18.1, which was released on October 28.

More information:

Meanwhile, Graykey’s performance with Android phones varies, largely due to the diversity of devices and manufacturers. On Google’s Pixel lineup, Graykey can only partially access data from the latest Pixel 9 when in an “After First Unlock” (AFU) state—where the phone has been unlocked at least once since being powered on.

Posted on November 26, 2024 at 7:01 AM

Comments

Brian November 26, 2024 8:19 AM

I really like the direction in which the mobile device manufacturers are going. Not so much from an anti-LE standpoint, but from an overall you-can’t-have-my-data-without-my-permission standpoint!

Clive Robinson November 26, 2024 9:10 AM

This is in effect a “throw your turn” game like “snakes and ladders”.

You all take one or two steps forwards in turn. In general you both progress, but sometimes you fall back down a lot.

There are in effect only three ways to reliably not lose,

1, Cheat any which way you can (timeless advice).
2, Remove your opponent from the board (many-millennia-old advice).
3, Do not play (1983 advice).

The cheating is seen by many as “bad” but turns to “good” when the opponent is “bad”. This suggests a campaign of,

“Blacken their name in everyone’s eyes”

Is more than “half the battle”, hence the continuous dog whistles of guard labour and their masters of,

“Think of the Children”

And worse, used to make those who only want privacy look evil in every way possible. It’s in effect a form of jury tampering in that the aim is to not allow a fair trial (game).

Another way is to stop an opponent playing; the modern version is by legislation, prosecution and incarceration. The problem is that the second two never happen to guard labour, who are more likely to be rewarded with promotion.

Which leaves “Do not play”. There are four basic ways to do this,

1, Don’t have any communications at all.
2, Use a method of communication that is not amenable to an opponent’s attack.
3, Don’t use the phone in any bad way.
4, Have a phone or communications system beyond your opponent’s capabilities.

In the modern world the first way is, I’m repeatedly told, “not practical”, which is odd, as it certainly was practical for the first quarter of my life: the technology did not realistically exist then, and later was not available to ordinary folk.

The second way has many forms, but secure communication was known about in the Victorian era, when the first steps towards making it easy to use through automation were in development.

The third, “don’t do anything bad”, is actually fairly pointless advice other than as “faux appeasement”, because there are always those who get to choose whether you are bad or not, not by your actions but by their desires and dictates.

The fourth way stops the collection of actual evidence, and thus is hated by guard labour and the like, who have to show in public that you are “bad” in front of supposedly impartial arbiters of truth like a jury.

Which is why this sort of game is also one of “cat and mouse”, and staying ahead means using your opponent’s strengths against them as well as exploiting any weaknesses they might have.

They in turn try to “up their game” at every turn and learn not just by your actions but the actions of others.

In effect,

“You have to not lose every time; they have to win only once.”

As for a “no score draw” or similar where neither wins, that’s a fairly significant warning it’s,

“Time to be some other place, doing something entirely different, if you want to survive.”

If you cannot “be somewhere else” or “be doing something else”, then you need several entirely different game plays in reserve, with a rapid way to switch to them.

One thing is certain, if you in effect,

“Stand still for the kill”

You will not remain free or unharmed very long.

In effect this game is another version of,

“The hamster wheel of pain”

Worse even than “The Red Queen’s Race”.

Thus I hate to say it, but the best tactic, in terms of use of resources etc., is to take out of the game not the opponent, but those the opponent is very much dependent upon.

If those who design, make and sell Graykey cease to be suppliers of necessary updated resources, then the opponent becomes in effect a static player and the roles are reversed.

There are various ways to do this. One is to blacken their name, and the names of similar organisations, to the point that what they do becomes not just illegal but subject to actual, meaningful punishment of individuals, such that they genuinely feel fear if they go down a path they formerly saw as profitable.

Eriadilos November 26, 2024 11:42 AM

Well, Graykey can’t unlock it YET.

The threat model of activists and the like should also include seizure by police, with the phone kept charged in a Faraday box for six months and then Graykey being used.

With the state of mobile security being what it is, if a device is not updated for a moderate period of time, it becomes vulnerable to those tools.

TimH November 26, 2024 11:58 AM

Looks like the rule is to turn your phone off ahead of any LEO or border interaction.

A quick-off hotkey would be good.

Garabaldi November 26, 2024 3:24 PM

@Eriadilos

Well, Graykey can’t unlock it YET.

This seems so important that it needs repeating. More than once, but I will refrain.

As far as I know, Enigma was secure in 1926.

Daniel Popescu November 26, 2024 4:10 PM

@Clive – you never cease to delight us with your comments, thanks. What happened in 1983 or what’s the context for that reference?

Not really anonymous November 26, 2024 5:01 PM

@Eriadilos
That threat model should be applied to any hidden data.
There was a guy (Rajib Mitra) who got pissed off at law enforcement because he was abused while being held in jail. (I happened to hear an FBI agent laugh while relating this part of the story at a security meeting. Probably an InfraGard meeting, but maybe a local security mini-conference.) He retaliated by playing porn over law enforcement comm channels. He was overcharged and convicted of a terrorism-related offense for that retaliation.

They also saved some encrypted files from some seized equipment. By the time he was getting close to release, the type of encryption used had been broken, and it turned out he had a nude picture of his girlfriend, who was under 18 at the time the picture was taken. He was then convicted of a child porn related charge based on the data that was initially unreadable but became readable later. Very shortly afterwards he committed suicide.

Clive Robinson November 26, 2024 8:31 PM

@ TimH, ALL,

With regards,

“A quick-off hotkey would be good.”

From whose perspective?

As an experiment back at the turn of the century when mobile phones were much simpler, I developed a “Molly Button” / “quick kill” for a device I mainly used as a “Personal Organiser”.

Data at rest was always encrypted, and data that was in certain applications like editors was also encrypted even when in use.

The encryption keys were “ephemeral” and protected in various ways (that are no longer advisable these days).

On hitting the “Molly Button”, the keys were scrambled (simple erasing is not a good idea, as even with good-quality RAM and anti-burn-in techniques there are potential issues). Then the RAM was turned off, and then the power.
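A minimal sketch in C of that kill sequence, assuming hypothetical hardware hooks (`hw_random_fill`, `hw_ram_off`, `hw_power_off`) standing in for whatever a given device actually provides; this is an illustration, not any real API:

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical hardware hooks -- placeholders, not any real device API. */
extern void hw_random_fill(volatile void *buf, size_t len); /* hardware RNG      */
extern void hw_ram_off(void);                               /* drop power to RAM */
extern void hw_power_off(void);                             /* cut main power    */

#define KEY_LEN 32                    /* 256-bit ephemeral key, held in RAM only */
static volatile uint8_t key[KEY_LEN];

/* "Molly Button" handler: overwrite the key with fresh random data
 * (scrambling rather than zeroing, since an all-zero key is a telltale
 * and erasure does nothing about prior burn-in), then kill the RAM,
 * then the power. The order matters: the key must die first. In a real
 * device this would run from ROM, without relying on the RAM it kills. */
void molly_button_isr(void)
{
    hw_random_fill(key, KEY_LEN); /* scramble, don't just erase */
    hw_ram_off();                 /* then the RAM               */
    hw_power_off();               /* then the power             */
}
```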

The problem was,

1, Molly Button too easy to activate accidentally.
2, Files that were being modified did not get written back to mutable memory / storage.

Believe me, after you watch an hour or so’s work go down the crapper accidentally, even for the second or third time, the desire to “disable the feature”, or make it way more difficult to accidentally activate, is very strong.

Thus the third problem,

3, An accident-reducing feature makes it all too easy for adversaries to “hit and grab”.

Simply shoving someone hard in the back is usually sufficient to stop even a simple Molly Button being activated.

Thus a simpler system was developed whereby you put a finger through a loop; if the device was grabbed, the loop would make activation way more probable. A second system was like a “dead-man’s switch” or “hand-grenade arm”: being held against a spring, simply letting go caused activation.
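A sketch of that second variant, reusing the hypothetical hooks above plus an assumed `hw_switch_held()` input poll; releasing the spring-loaded switch, whether voluntarily or because the device was snatched, fires the same wipe sequence:

```c
#include <stdbool.h>

/* Hypothetical input hook -- a placeholder, not any real device API. */
extern bool hw_switch_held(void);    /* true while the spring switch is held */
extern void molly_button_isr(void);  /* the wipe sequence sketched above     */

/* Dead-man's check, called from a periodic timer tick so normal work
 * carries on in the meantime: letting go of the switch fires the wipe,
 * like releasing the arm lever of a grenade. */
void deadman_tick(void)
{
    if (!hw_switch_held())
        molly_button_isr(); /* released: scramble key, kill RAM, cut power */
}
```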

The problem with this is that it’s all too easy to move your fingers subconsciously, even after a relatively short period, to avoid “cramping up” or similar.

I went on to look at using a “Java Ring” but these had other issues.

Simple as the idea is, the issue is that the human operator is fallible even at the best of times, and does not like security features getting in the way of their work flow.

So first you need to “fix the human”, and that is almost impossible to reliably achieve… as humans all have human failings, and most cannot evaluate risk at the best of times, even when they are likely to come to considerable harm…

Clive Robinson November 26, 2024 8:50 PM

@ Daniel Popescu, and others too young.

1983 was a year when fairly good security advice came from being entertained…

An AI had to learn almost the hard way what MAD really meant, and thus passed an opinion,

https://m.youtube.com/watch?v=NHWjlCaIrQo

I don’t know how many millions of people have seen it but strangely we humans do not appear to have learnt from it 🙁

ResearcherZero November 26, 2024 10:21 PM

Set your phone to at least reboot every twelve hours. Wipe it remotely if stolen/seized.
Alternatively, have a trusted party who can wipe it for you if you yourself are indisposed.

ResearcherZero November 26, 2024 10:24 PM

Concepts of Privacy: Why do we need it?

‘https://link.springer.com/chapter/10.1007/978-3-031-51063-2_2

The plurality of identification.
https://knightcolumbia.org/content/anonymity-identity-and-lies

“privacy is valuable not because it empowers us to exercise control over our information, but because it protects against the creation of such information in the first place.”

https://www.hup.harvard.edu/features/what-is-privacy-for

Privacy strengthens other rights.
https://www.csis.org/analysis/right-be-left-alone-privacy-rapidly-changing-world

Mohammed Khan November 26, 2024 10:38 PM

@Eriadilos: It is not always necessary to have perfect forward security. You can look, for example, at what the UK regime has been doing, detaining if not arresting any journalist in communication with sources in Palestine when they cross the border. Under ‘Operation Incessantness’, information about those sources, often local journalists, is then passed on to the Israelis so they can target them for murder.

In that situation, if a device can be kept protected from the UK pigs and their puppetmasters for a year or two, it reduces the danger to the locals to no more than the already very high danger that any other legitimate inhabitant of Palestine faces.

cls November 27, 2024 4:20 AM

What happened in 1983 or what’s the context for that reference?

American movie, WarGames.

A teenager with an (already obsolete) IMSAI and a 300 baud modem dials up a US national security computer; hijinks ensue.

ATN November 27, 2024 7:19 AM

Daniel Popescu • November 26, 2024 4:10 PM

What happened in 1983 or what’s the context for that reference?

https://www.imdb.com/title/tt0086567/ film “WarGames”.
The WOPR computer: strange games [both tic-tac-toe and war games]; the only winning move is not to play; how about a nice game of chess?

Eriadilos November 27, 2024 11:46 AM

@Mohammed Khan
This may or may not hold true; the thing is that the future cannot be predicted.
This is why, in my opinion, the threat model should always assume the worst. As for your example, seeing the recent Israeli operations, assuming the absolute worst seems like a good bet.

@Clive
The kill switch seems like a nice idea in theory, but for consumer products it seems like too much of a hassle to be adopted. As always, a balance has to be found between security and usability, which as you described is not an easy thing.

ResearcherZero November 28, 2024 12:52 AM

The future can be predicted, but you may not be able to change the course of events.
Therefore the threat model should always assume the worst, absent action to prevent it.

We know that without adequate privacy law we will have no privacy.

You are what you are assumed to be.

‘https://privacyinternational.org/long-read/5472/chatbots-adbots-sharing-your-thoughts-advertisers

Identity is a process of becoming rather than being.
https://journals.sagepub.com/doi/full/10.1177/2158244020934877
