Lots More Writing about the FBI vs. Apple

I have written two posts on the case, and at the bottom of those essays are lots of links to other essays written by other people. Here are more links.

If you read just one thing on the technical aspects of this case, read Susan Landau's testimony before the House Judiciary Committee. It's very comprehensive, and very good.

Others are testifying, too.

Apple is fixing the vulnerability. The Justice Department wants Apple to unlock nine more phones.

Apple prevailed in a different iPhone unlocking case.

Why the First Amendment is a bad argument. And why the All Writs Act is the wrong tool.

Dueling poll results: Pew Research reports that 51% side with the FBI, while a Reuters poll reveals that "forty-six percent of respondents said they agreed with Apple's position, 35 percent said they disagreed and 20 percent said they did not know," and that "a majority of Americans do not want the government to have access to their phone and Internet communications, even if it is done in the name of stopping terror attacks."

One of the worst possible outcomes from this story is that people stop installing security updates because they don't trust them. After all, a security update mechanism is also a mechanism by which the government can install a backdoor. Here's one essay that talks about that. Here's another.
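To make that worry concrete, here is a minimal, hypothetical sketch (the names, and the use of an HMAC in place of a real asymmetric code-signing scheme, are illustrative assumptions only): a device that accepts any update carrying a valid vendor signature cannot distinguish a security fix from a compelled backdoor.

```python
import hmac
import hashlib

# Hypothetical illustration only: real update systems use asymmetric
# signatures, but the trust structure is the same -- whoever holds (or
# can compel the use of) the signing key decides what gets installed.
VENDOR_SIGNING_KEY = b"vendor-held signing key"

def sign_update(payload: bytes) -> bytes:
    """Produce the vendor's signature over an update payload."""
    return hmac.new(VENDOR_SIGNING_KEY, payload, hashlib.sha256).digest()

def device_installs(payload: bytes, signature: bytes) -> bool:
    """The device checks only that the signature verifies; it cannot
    tell a legitimate patch from a government-ordered backdoor."""
    return hmac.compare_digest(signature, sign_update(payload))

patch = b"legitimate security fix"
backdoor = b"update that quietly weakens the lock screen"

assert device_installs(patch, sign_update(patch))
assert device_installs(backdoor, sign_update(backdoor))   # installs just the same
assert not device_installs(patch, b"\x00" * 32)           # only the key matters
```

The only check the device can perform is cryptographic, so a user's trust in updates ultimately rests on who controls the signing key and under what legal compulsion.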

Cory Doctorow comments on the FBI's math denialism. Yochai Benkler sees this as a symptom of a greater breakdown in government trust. More good commentary from Jeff Schiller, Julian Sanchez, and Jonathan Zdziarski. Marcy Wheeler's comments. Two posts by Dan Wallach. Michael Chertoff and associates weigh in on the side of security over surveillance.

Here's a Catholic op-ed on Apple's side. Bill Gates sides with the FBI. And a great editorial cartoon.

Here's high snark from Stewart Baker. Baker asks some very good (and very snarky) questions. But the questions are beside the point. This case isn't about Apple or whether Apple is being hypocritical, any more than climate change is about Al Gore's character. This case is about the externalities of what the government is asking for.

One last thing to read.

Okay, one more, on the more general back door issue.

EDITED TO ADD (3/2): Wall Street Journal editorial. And here's video from the House Judiciary Committee hearing. Skip to around 34:50 to get to the actual beginning.

EDITED TO ADD (3/3): Interview with Rep. Darrell Issa. And at the RSA Conference this week, both Defense Secretary Ash Carter and Microsoft's chief legal officer Brad Smith sided with Apple against the FBI.

EDITED TO ADD (3/4): Comments on the case from the UN High Commissioner for Human Rights.

EDITED TO ADD (3/7): Op-ed by Apple. And an interesting article on the divide in the Obama Administration.

EDITED TO ADD (3/10): Another good essay.

EDITED TO ADD (3/13): President Obama's comments on encryption: he wants back doors. Cory Doctorow reports.

Posted on March 1, 2016 at 6:47 AM • 87 Comments

Comments

Clive Robinson • March 1, 2016 7:30 AM

And so the dance continues; with each round the FBI appears to lose ground, not just in public opinion but in its honesty, and thus its trustworthiness.

As I've observed, Hoover's ghost still walks the FBI HQ corridors...

Just think what might have remained unknown if the FBI / DOJ had not told the press even before the magistrate had inked their paper...

Which has the obvious rider of "What is yet to be known?".

AlanS • March 1, 2016 7:39 AM

Benkler:

The fundamental problem is the breakdown of trust in institutions and organizations. In particular, the loss of confidence in oversight of the American national security establishment.

Breakdown? If there was ever any trust in the national security establishment or confidence in its oversight, it was misplaced. There's a long history of misdeeds stretching from the earliest days of the modern security establishment to the present. These sorts of secret institutions are inherently untrustworthy. We supposedly need them to protect our democracy from threats, but they themselves are deeply threatening to democratic institutions.

RonK • March 1, 2016 7:50 AM

> Michael Chertoff and associates weigh in on the side of security.

This probably should have been worded differently: "security" against what? I had to download the linked paper to make sure that Chertoff hadn't done another 180-degree turn...

Stardust • March 1, 2016 8:50 AM

Where did climate change deniers come from? It's easy to call them stupid and leave it at that. But few people know enough to determine for themselves whether climate change is real; most need to trust someone else. When Al Gore first took the stand, global warming was immediately used to hit political opponents over the head. This is the source of deniers: THEY DON'T TRUST THE CLAIMERS.

I thought you were an expert on trust. If people don't trust Apple they will side with the FBI. If they don't trust the FBI they will side with Apple.

There are a lot of angry people, in tech too. They like to claim everyone is stupid except for them. No one will trust them. It will hurt Apple most.

chuckb • March 1, 2016 9:06 AM

Anyone with any doubt about how poor the case is for decreasing security need only read Stewart Baker.

BoppingAround • March 1, 2016 9:25 AM

> I thought you were an expert on trust. If people don't trust Apple they will side with the
> FBI. If they don't trust the FBI they will side with Apple.

Or neither, suspecting that both sides may have ulterior motives. As well as the whole situation being a bit (quite a bit) more complex than what they try to paint it with ('consumer trust', 'protecting your privacy', 'protecting your security', whatever).

Anonymous Cow • March 1, 2016 11:02 AM

Don't know if you've seen TechDirt's post on the New York case, but a comment there raises a good point: if the California and New York cases create a circuit split and the California case is petitioned to the Supreme Court, a 4-4 split decision would leave the lower court's ruling standing (assuming the Court of Appeals upholds the government's position).

Stardust • March 1, 2016 11:21 AM

How can you say Al Gore's character doesn't matter? You couldn't be more wrong.

Any sales person can tell you, if they don't trust you they won't buy from you. You can stand on your head, you can demo all the advantages you want. They might even agree yours is the better product. But they won't buy from you if they don't trust you.

You speak of facts. Whose facts? Your facts or the other guy's facts? People are bombarded by "facts" all day long, every day. You yourself are no different. Do you really understand climate change the way and to the extent that climate scientists understand it? At some point you have to trust somebody. But history shows the really smart person with all the facts can eventually be proven wrong, very wrong. Like Ptolemy, or George Darwin and his amazing theory about the moon that had everyone convinced, until rocks from the moon returned to Earth.

Trust who? The guy that looks and acts like a game show host? Should you trust McAfee, who looks and acts like a psychiatric case? Others like to play the "I'm smart so you should believe me" card. Until they're wrong.

Mark Mayer • March 1, 2016 11:32 AM

Stewart Baker's article is a good read. There's stuff in there that I haven't seen brought up before regarding Apple's activities in China, including known compromises as well as speculation about others. The stuff about WAPI is especially damning.

My overall feeling re: China is that they don't absolutely need a backdoor because they, too, have bulk data collection and metadata mining/filtering techniques similar to the NSA's. The Chinese security services don't have the legal firewalls that we have between the FBI and the NSA; I think we can assume there are no legal obstacles to stop Chinese intelligence from sharing with Chinese law enforcement. The supposed deal that China has made with the tech industry -- no backdoors for us if no other government has them -- is entirely plausible and in line with Chinese political goals of economic development.

That said, I don't think Baker's article really undermines Apple's case. It's a very fair criticism of Apple, but doesn't address the weaknesses of the government's case nor the larger issues at stake.

Mark Mayer • March 1, 2016 11:40 AM

Also, the Orenstein decision is a pretty good read.
https://www.documentcloud.org/documents/2729399.html#document/p1

It's entertaining if you know how to translate restrained legal language. For example, a "particularly unconvincing" argument translates to "this argument is complete bullshit". Orenstein basically says that all of the government's arguments are a crock of shit. If Orenstein didn't have to write a formal legal document in legalese, his decision could have been, "The government's request is denied because their arguments are lame."

herman • March 1, 2016 11:59 AM

And yet, pretty much nobody will put their money where their mouths are and buy equipment that is inherently more secure than average: Apple Mac, Linux, BSD, Blackphone... all continue with an uphill struggle.

A Telco Security Dweeb • March 1, 2016 12:06 PM

Earth To Stewart Baker :

"Two 'wrongs' (both the Chinese and U.S. governments, demanding backdoors into Apple's products), don't make a 'right'."

Sasparilla • March 1, 2016 12:15 PM

For entertainment value if nothing else, don't miss Paul McAfee's description of what the govt would need to do to gain access to the iPhone if it really wanted to and what he thinks the proposed National Committee on Digital Security would be used for:

https://www.youtube.com/watch?v=MG0bAaK7p9s

Mark Mayer • March 1, 2016 12:20 PM

@Anonymous Cow
The San Bernardino court is subordinate to the 9th circuit. The N.Y. court is subordinate to the 2nd circuit.

If anyone is handicapping for wagering purposes: the 9th Circuit has more Democratic appointees than other circuits, but it is also the largest circuit. It has been accused by conservatives of having a liberal bias (but when don't they make accusations of liberal bias?).

We should note that the territory of the 9th Circuit is "tech heavy". I'm not suggesting that the judges are biased towards the tech industry, but I do think it likely that they are collectively more "tech savvy" with regard to the intersections of law and technology. Which is not to say that 2nd Circuit judges are ignoramuses; I don't know enough to say one way or the other.

Another thing to keep in mind is that these are different cases beyond the jurisdictional differences. If both are appealed all the way, it's conceivable that even a full Supreme Court could let "contradictory" decisions stand by narrowly considering the particulars of each case. Besides which, it will be some time before either case makes it to the Supreme Court. The U.S. Senate can and will delay the appointment of a replacement justice for the rest of Obama's term, but they will have to confirm an appointment during the next president's term sooner rather than later. I predict that when, or if, either case gets to the Supreme Court, there will be a full panel.

DRM • March 1, 2016 12:25 PM

What Paul McAfee explains is explicitly illegal. You can thank WIPO.

On top of that, I'm not 100% certain he's right. SoC systems are not generic PCs.

Anonymous Poster • March 1, 2016 12:43 PM

Worryingly, in the United Kingdom the law is being amended to:

extend the use of state remote computer hacking from the security services to the police in cases involving a “threat to life” or missing persons. This can include cases involving “damage to somebody’s mental health”, but will be restricted to use by the National Crime Agency and a small number of major police forces.

The expansion of police powers to access web browsing history as part of their investigations follows pressure from the police, and the use of these powers does not need the “double-lock” ministerial authorisation.

http://www.theguardian.com/uk-news/2016/mar/01/snoopers-charter-to-extend-police-access-to-phone-and-internet-data

http://news.sky.com/story/1651458/snoopers-charter-reboot-extends-spying-powers

Damaging somebody's "mental health" is very broadly worded.

It's also being rushed through Parliament (to avoid scrutiny) despite three independent reviews severely criticising the proposed law.

albert • March 1, 2016 2:45 PM

@Anonymous Poster,
"...“damage to somebody’s mental health”..."

Kinda funny when you think about it. Who better to determine mental health damage than those who are already mentally damaged?

Re: Gates sides with the FBI. Looks like Bill has become irrelevant as a technologist (if he ever was). At least he's not a politician, where he could do some serious damage.

. .. . .. --- ....

Skeptical • March 1, 2016 3:02 PM


Baker's little masterpiece points, more artfully and more clearly, to what I found so completely annoying about Apple's PR campaign.

I've also read Judge Orenstein's opinion, which is frankly a model of what a court ought not to do (and not simply because I dislike the result).

He clearly intends the opinion to have broad persuasive effect. He notes that he specifically refused the ACLU's request to file an amicus brief in the matter, and asked Apple to address the legal question of the applicability of the AWA (previously Apple studiously avoided the question, instead providing factual information as to what actual services would be required to meet the government's request). He did this, he writes, because Apple was in a better position to address the AWA issue than the ACLU (for reasons we never learn - and indeed most of Orenstein's opinion has nothing to do with Apple in particular).

Beyond that clear tell, his opinion extends far beyond what is necessary for his order. It reads like a brief, with a series of arguments in the alternative: p, because q; but even if not q, p because r; but even if not r, p because s; but even if not s, p because t....

He goes so far as to consider, unnecessarily, whether the AWA would be constitutional if he were to accept the government's argument - which is frankly remarkable.

In an incomplete nutshell - I'm going to skip some parts - he finds there to be a comprehensive statutory framework already in place to address the issue of the extent to which the government may compel Apple to assist in the retrieval of information at rest. This framework is, apparently, scattered across various sections of federal law, and is described in the opinion with little more than the classic hand-waving and an example. Because this framework exists, the AWA is inapplicable here.

Moreover, he finds that if he were to accept the government's argument, the AWA would have no principled boundary to its application, leading to an absurd and unconstitutional result. For example, he believes that the government's proposed interpretation would permit a court to order a pharmaceutical company to manufacture a drug for the purpose of effecting a legally ordered execution if the company refused to do so. As laws, if possible, are to be interpreted in such a way as to avoid that result, and as he can interpret the AWA in an alternative fashion here, he must reject the government's argument.

Finally, even if the AWA did apply, he argues that it would place an unreasonable burden upon Apple - not because of the dollar cost of the services required - but because Apple "finds it offensive" and because it would be, according to Apple, contrary to the kind of company Apple wants to be (and he takes Apple at their word because, after all, this is actually a difficult policy question and why shouldn't he believe that Apple has a reasonably different view than the government).

In his mercifully final sentences he writes most eloquently and sparingly, wherein for a last time he strikes his repeated point that the legislature and not the courts ought to answer the complex policy questions posed by this issue.

Almost none of his arguments stand up to scrutiny. His notion of an unreasonable burden is so unbelievably deferential to a company that it would permit any company to claim an order otherwise lawfully issued under the AWA to be an unreasonable burden merely by virtue of the fact that the order conflicts with the company's vision of itself and its values. Read in conjunction with Baker's letter to Tim Cook, Orenstein's section on "unreasonable burden" appears remarkably naive and almost ludicrous. Nor will anyone find much support for such an analysis in case law - though note that Orenstein carefully states that his analysis here would be a matter of discretion in the application of the law (shielding it, he hopes, from more rigorous appellate review).

His eloquence in advocating for Congress as the branch better suited to resolve the difficult policy issues undermines his argument that the AWA does not apply because Congress has already created a statutory framework addressing the issue at hand. Clearly, Congress has not addressed the issue - its inaction has left a vacuum in the law, a space between the government's duty to execute lawful search warrants in the interest of the public welfare, the court's power to authorize such warrants, and the government's ability to effect such warrants without the compelled cooperation of third-parties such as Apple.

It is precisely to address such a vacuum that the AWA exists, and has long existed. Congress can act, and render the AWA inapplicable. But until and if they do, the vacuum remains, and the law provides the court with a remedy to address that vacuum.

Orenstein fumbles the delivery here a bit, though, by inexplicably claiming that if he were to issue an order under the AWA, he would thereby prevent Congress from taking action. I'm sure he has something along the lines of, "if the courts issue orders under the AWA, then the pressure on Congress to act is reduced" in mind. But that's a political consideration, not a legal consideration, and certainly not a reason for refusing to issue an order under a law that is expressly designed to enable courts to act where Congress has left gaps in the legal apparatus effecting the law.

CallMeLateForSupper • March 1, 2016 3:30 PM

Susan Landau:
"...instead of embracing {encryption and locking
down devices} as an important and crucial
security advance, law enforcement has largely
seen such technologies as an impediment to
lawfully authorized searches. This is a
twentieth-century approach to a twenty-first
century problem - but in that fact lies the
possibility of a solution."

She is saying the FBI needs to change its business model. Ironic: this is what Comey told Apple's Cook he needed to do.

CallMeLateForSupper • March 1, 2016 3:48 PM

@any who use NoScript

My attempt to load the Jonathan Zdziarski page coughed up this unseemly greeting:

"Please turn JavaScript on and reload the page.
DDoS protection by CloudFlare"

No thanks, J.Z.

paranoia destroys ya • March 1, 2016 3:57 PM

The All Writs Act was signed into law by George Washington a day before the Bill of Rights was proposed.
All Constitutional amendments supersede it; a local lawyer even invoked the 13th over the issue of "involuntary servitude".

Arthur • March 1, 2016 4:07 PM

@Stardust

"Few people know enough to determine for themselves whether climate change is real."

You make me think of the irresponsible speech DiCaprio gave at the Oscars ceremony, saying:

"Climate change is real, it is happening right now. It is the most urgent threat facing our entire species." https://www.youtube.com/watch?v=taTv2GGjtE0

Climate change is the consequence of overpopulation.
Overpopulation IS the most urgent threat facing our entire species.
Stop reproducing like rabbits and things will be better.

Tõnis • March 1, 2016 4:40 PM

@Arthur & @Stardust, as I always say, I don't even care if climate change is real. In less than a hundred years I'll be dead. And now, I'm feeling a bit chilly ... On my way to the thermostat to increase my carbon footprint!

Skeptical • March 1, 2016 4:55 PM


@LateforSupper:

A change of model may yet occur, but it's not one that any civil libertarian would be happy about.

Let's be clear: the alternative to asking Apple for a service is asking Apple for the information required to build the service without Apple.

So Apple wins, right? It doesn't need to provide any services to the government. But instead it will need to provide copies of its most closely guarded information, which previously the U.S. Government (at least) avoided asking for by instead allowing Apple to perform the needed services.

Put another way: right now, the US Government is asking Apple for a fish. And the practice has been, apparently, that when the U.S. Government needed a fish, it would ask Apple to help it get one. This time the U.S. Government asked for a fish that required Apple to tie a new hook; Apple refused, and is refusing in other cases where its prior practice was to cooperate.

Apple is now saying to the U.S. Government: no, we won't do that. So, what will the U.S. Government do?

It will learn to fish for itself. To do so, it may need to request certain information from Apple.

If you're comfortable with the protections of U.S. law, then you need not be concerned. If you preferred having the involvement of a third party like Apple as well in such matters, well, then quite frankly Apple is about to screw you in the name of public relations.

It's great for Apple though. It can distance itself from the U.S. Government and push harder into certain foreign markets where such an association might be unwelcome. Sure, such foreign markets might require certain adjustments be made - but good luck getting that information.

Ironically it can even use the stance as a recruiting tool for talent among the less politically and commercially aware.

In short, there is very, very little about this that does not reek of corporate manipulation and the pursuit of self-interest.

"yoshi2"March 1, 2016 5:00 PM

On February 29th I posted a reply to one of the blogs on this topic. The post was successful, yet today my post is missing. Is this censorship? And why? If there is a specific reason why I was censored, censors, please send me an email.

In my post I had mentioned the concept that cryptanalytical breaks do not require "backdoors" to be created in order to circumvent cryptological locking technologies.

I used an analogy of a locksmith being hired to unlock anything. In order to unlock a door, the locksmith doesn't create a new door in your house or car. They utilize their "bag of tricks", which can be myriad, to unlock the door. Yes, as with all analogies, it's not entirely that simple, but it gets the point across.

To unlock anything, the cryptanalytical cracking techniques needn't be anything like a backdoor.

So all of this sensationalism surrounding the overuse of the term "backdoor" is entirely misleading and misses the whole point of getting to information in order to aid counter-terrorism.

I have the feeling that a lot of this recent flurry of overhyped discussion about this stuff is designed to smoke people like us out of the woodwork. I have this opinion because clearly this conversation is not a sound technological discussion of cryptologic techniques. But of course, it would be somewhat odd if it were. So this sensationalism brings out the political and philosophical aspects of people's personalities. It lets others know where some of us stand on this, and how reactionary we are or aren't.

Meanwhile, a few decent replies have highlighted the probability that the NSA has some cracking capabilities to overwhelm, circumvent, or replicate the Apple devices in question--those involved with the anti-terrorism cases. And it's not wild speculation to consider the FBI consulting and/or partnering with the NSA (and other organisations) to get to the information in question.

So I think we need to be more specific about what we are talking about.
In terms of the cryptological circumventions, it's not all about "backdoor" technology. That's actually a separate concept when it comes to the technical issue of how to get at the information in question.

That's all I have to say about this.

Sincerely,
"yoshi2"

P.S. - hello PlaidMan :) Nice to see you here after all these years.

BUGSPLAT! • March 1, 2016 5:02 PM

In which skeptical learns what happens to a piss-ant judge when she tries to push a half-trillion dollar company and its corporate oligarch over the line from in rem to in personam.

ianf • March 1, 2016 5:16 PM


@ CallingYouLateForSupper

fine, you don't trust Zdziarski enough to let his CloudFlare add-on perform a basic browser check, weed out potential script-injection tricks, etc., …

BUT you still expect Zdziarski to trust you—BECAUSE it is YOU?

Get real. Real soon, preferably.

PS. That JavaScript is needed presumably only for the initial CloudFlare check, not for browsing the blog. I don't remember it being there before, so Jonathan must've been hit with something recently, and now acts once bitten, twice shy.

ianf • March 1, 2016 5:52 PM


@ yoshi2 On February 29th posted a reply to one of the blogs on this topic

Did you intend to write commented in one of the threads on this blog,

OR

posted something in some blog on this very topic?

If the former: you're in the wrong thread.

If the latter: try recalling which blog it was, and then complain there over your missing comment.

I tried to read the rest of it, but couldn't see what you wanted to say (I'm OK with that).

@ BUGSPLAT! In which skeptical learns what happens…

… perhaps if you could deign to quote the actual "lesson," we (=I mean "I") wouldn't have to decode what either of you had "in mind."

Niko • March 1, 2016 6:02 PM

@Stardust

I don't trust Apple. Imagine if an important shareholder asked Tim Cook during a conference call how likely it was that hackers would steal the source code to the next generation of iOS, or some important IP or trade secrets. Tim Cook would almost certainly reassure the shareholders that Apple uses state-of-the-art security, that stealing its IP would be almost impossible, that its R&D network had never been breached, etc. Yet, if asked about govtOS, he would explain how it would inevitably get hacked and stolen in a matter of days, no, hours after Apple finished writing it. Clearly, Apple would be lying about something.

There is a more reasonable argument that US rules of evidence might mean Apple loses control over the tool and has to disclose it to lots of untrustworthy people, increasing the probability it will leak. However, that's not an inherent feature of "the math" or cryptography. That's primarily a legal issue.

Ding ding ding! What do we have for her Johnny? • March 1, 2016 6:03 PM

This whole FBI/Apple shit is starting to sound like a soap opera.

Why don't they just blindfold each other and both go into the same dark hotel room. I mean, they're both bound to find each other's holes sooner or later. :P

Dirk Praet • March 1, 2016 6:24 PM

@ Sasparilla, @ DRM

Paul McAfee? I suppose George Ellison and Ringo Barr are on the team too then?

@ Skeptical

For example, he believes that the government's proposed interpretation would permit a court to order a pharmaceutical company to manufacture a drug for the purpose of effecting a legally ordered execution if the company refused to do so.

It's less far-fetched than you would like us to believe. Several companies opposing the death penalty discontinued production of certain chemicals used in capital punishment by lethal injection. Resulting in a shortage thereof. I see no reason why under the government's interpretation of the AWA the state of Texas could not compel a pharmaceutical company to assist in executing a legally sanctioned death sentence of an inmate. Which is probably exactly the kind of absurdity Orenstein had in mind when stating that the government's interpretation of the AWA would lead to completely impermissible results.

His eloquence in advocating for Congress as the branch better suited to resolve the difficult policy issues undermines his argument that the AWA does not apply because Congress has already created a statutory framework addressing the issue at hand.

No, it doesn't. I read it as a very nuanced opinion in which he clearly states that although in his expert opinion there is another statutory framework in place, it would serve the common good if Congress were to speak out and settle the matter once and for all, ending any current and future "Hineininterpretierung" of, and perceived overreach through, a 220-year-old statute, and clarifying where it does and does not apply today. It would appear that this is also what Apple wants.

Meanwhile, I guess we'll have to wait for the next episode, which is probably Apple's appeal to the Ninth Circuit against Judge Pym's ex parte order in the SB case. You may dislike Judge Orenstein's ruling as much as you want, but you certainly cannot ignore it.

@ Stewart Baker fanboys

The man always makes an excellent argument and I really wish he were on our side. But let's not forget that he is also a former General Counsel of the NSA and was at the time one of the most ardent defenders of the Clipper Chip.

@ Niko

Yet, if asked about govtOS, he would explain how it would inevitably get hacked and stolen in a matter of days no hours after Apple finished writing it. Clearly, Apple would be lying about something.

In risk management, there are essentially three different strategies: mitigate, transfer, and avoid. Protecting its IP is an essential part of Apple's operations and therefore must be mitigated as much as possible. Writing a crazy dangerous and legally questionable hack for the government definitely is not, and therefore from a risk management perspective needs to be avoided.

Robert Hudyma • March 1, 2016 7:00 PM

My understanding from Susan Landau's technical discussion is that the iPhone will back up to iCloud if the passwords are matched. The FBI changed the password on the phone and requested that Apple reset the iCloud password to prevent outside access to that account.

Furthermore, my understanding is that the FBI can access iCloud data through a court order.

So why can't both passwords now be set to the same value, so that the iPhone backs up to iCloud and the FBI gets what it is asking for?

EvilKiru • March 1, 2016 8:11 PM

@ianf:

"fine, you don't trust Zdziarski enought to let his CloudFlare add-on perform a basic browser check, weed out potential script injection tricks, etc., …"

I'm not @CallingYouLateForSupper, but if someone's site security depends on their ability to run client-side javascript in my browser, then they're doing site security wrong.

J. Scott Kasten • March 1, 2016 8:59 PM

Greetings,

I've been a long-time follower, and finally, in Cryptowars 2.0 (or is that 5.0?), I feel the need to write in.

One of the big questions in the whole debate is whether the government should be able to compel a suspect to yield their passwords and private keys. While thinking about this, I came up with a line of argument very different from any that I've heard and thought it worth sharing.

Cryptography is math. Math has slightly different properties than the "real world". In the real world, if I'm challenged in court by a document that I believe is forged, I always have the option of finding a better expert to testify on my behalf. Pressure, DNA, brand of ink, fine motor skills, and any number of other things can be used to call the signature and authenticity of a document into question. Math, though, is all or nothing. If I ever yield my passwords and private keys, I permanently lose the ability to prove that evidence presented against me is false. An overzealous prosecution could use those keys to manufacture any number of false documents, and there is nothing I could do about it. The only thing that gives me the ability to refute false claims is the privacy of my secrets. Thus, I would think there is a strong argument that evidence collected through any means of obtaining the keys is some sort of "fruit of the poisonous tree" and thus inadmissible in court. Further, if the court is the agent that compelled the suspect to produce the keys, the very action that prevents you from properly defending yourself against false claims, then the court is "biased", or has denied the suspect the right to a fair trial.
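That forgery concern can be made concrete with a toy sketch (Python standard library; a symmetric MAC stands in for real digital signatures, and the key and messages are purely illustrative): once the secret is disclosed, a tag made by the key's owner and a tag made by anyone else holding the key are mathematically indistinguishable.

```python
import hmac
import hashlib

# Toy illustration: a disclosed secret key lets ANY holder produce
# authentication tags identical to the owner's.
disclosed_key = b"suspect's compelled secret key"

def tag(message: bytes) -> str:
    """HMAC-SHA256 tag; proves only that 'someone with the key' made it."""
    return hmac.new(disclosed_key, message, hashlib.sha256).hexdigest()

genuine = tag(b"message the suspect actually wrote")
forged = tag(b"incriminating message the suspect never wrote")

# Both verify identically against the same key; nothing in the math
# identifies WHO created either tag once the key has changed hands.
assert hmac.compare_digest(genuine, tag(b"message the suspect actually wrote"))
assert hmac.compare_digest(forged, tag(b"incriminating message the suspect never wrote"))
```

Nothing in either tag records who computed it, which is the crux of the argument: after compelled disclosure, "authenticated by the suspect's key" no longer implies "authored by the suspect".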

The whole escrowed keys / back doors issue is to some degree an extension of this more fundamental problem. I present these thoughts for proper debate as I'm not a lawyer, and I've never heard anyone phrase the issue in quite this way.

Cheers,
-J. Scott Kasten-

Mark MayerMarch 1, 2016 9:55 PM

@J. Scott Kasten
Very interesting and thoughtful post on your first call to the show. I'm no expert, but your points about evidence are worth pondering. Thanks, and don't be so shy in the future.

@Skeptical
Maybe you should live up to your call sign and look up the cites that the DOJ uses. You might find, as Orenstein points out (and is also apparent from the SB case) that the citations don't really say what the government lawyers claim they say. The way the DOJ uses case law cites would be a lot more appropriate for a different jurisdiction: the District Court of Wonderland. Or maybe they should restrict filing their motions to Opposite Day.

@Robert Hudyma
My understanding is that the authentication isn't merely to match passwords, but that an authorization token gets set. When they changed the password, a new token was created. When they changed back, a third token was created. This third token wouldn't match what the iPhone is expecting, so now it can't/won't do the automatic backup.

@Dirk Praet - thanks for the background on Baker. I didn't know that about him (I really don't know anything about him). Ironic that he nails Apple for IMAPI. Did he ever disavow the Clipper chip? I still think there is valid criticism in his article, I just don't see it having that much to do with the case before us. I'm a little troubled with his format. By asking snarky questions, he can imply much without having a factual basis for criticism. I think I'd have more confidence in a straightforward article.

NikoMarch 1, 2016 10:20 PM

@Dirk

I was mostly criticizing the Cory Doctorow article. Your network defenses don't know whether they're protecting something legally questionable and dangerous, or part of your core business. Either Apple's IT security is good enough that it can protect both "backdoors" and its trade secrets, or it's poor enough that they're both likely to be compromised.

Clive RobinsonMarch 2, 2016 5:09 AM

@ J. Scott Kasten,

Cryptography is math. Math has slightly different properties than the "real world".

Actually, math is "intangible information": a tool for handling other intangible information, which may or may not be an abstraction or imprecise model of how we think the real world is...

As such, math is very much outside what you call the "real world" of the tangible, physical universe, our understanding of which is governed by energy/mass and the forces and the speed of light that constrain them.

Information has no physical actuality, however we do impress or modulate it on either energy or matter so that we physical beings can store, communicate or process it.

And you have to clearly understand that "impressing" is not just a two-way process; it's also one that is entirely separate from the energy and matter it is being impressed upon.

Which brings us onto your point of,

Pressure, DNA, brand of ink, fine motor skills, and any number of other things can be used to call the signature and authenticity of a document into question.

They are all "physical authenticators" not "information authenticators", that is they stand in testiment to each physical part of a document, but in no way do they stand in testiment to the information that is impressed on the document. It's a crucial point that the judiciary have been strugling with for over a third of a century and there appears to be no resolution in sight.

A major part of the problem is that we are "physical beings", and our assumptions, overt or covert, are predicated on our interaction with a physical world. Philosophers have been pondering the intangible-thoughts versus tangible-experience issue for centuries.

The law, however, can have no such uncertainties in deciding truth and thus actions contrary to law; it can, however, consider them in any punishment given. Thus, although guilty of an act, you may not actually be punished.

One of the fundamentals of forensics is Locard's Exchange Principle. Put simply, it is what "trace evidence" is all about: physical objects, when interacting, leave traces of that interaction on each other, the criminal thus leaving behind direct and traceable physical evidence such as a hair or fingerprint.

But Locard's observation moves into troubled waters when it comes to "impressions" such as shoe prints and teeth marks, and even more so with ESDA and the like. Without direct trace, such evidence is circumstantial, because it is in reality "information exchange", not "physical exchange": there is no way to say that a specific object made the impression rather than a duplicate or copy. That is, the information exchanged is "meta-data" about a physical object, and like all information it can be copied or duplicated endlessly with almost trivial ease (hence the rise of manufacturing industry displacing bespoke craft production). Differentiating the meta-data of a physical object from that of a copy depends on the precision of the copy, the measurement process, and the impressed and impressing materials.

This problem gets worse the closer the impressing object gets to being pure information. As information has no physical form, and can be copied and transmitted at virtually no cost and at any precision you can measure, it becomes impossible by examination to determine what is "original information" and what is an "informational copy". Thus forensics falls back on the physicality of the impressed "carrier medium", which opens a very large loophole for clever minds to exploit.

Which brings us to your two points of,

An overzealous prosecution could use those keys to manufacture any number of false documents, and there is nothing I can do about it. The only thing that gives me the ability to refute false claims is the privacy of my secrets.

That is a conclusion that more people are starting to realise and there is no resolution to it.

It's why all non-physical evidence should be treated as highly suspect, and at best as circumstantial. The bulk of such evidence should in reality be treated as "hearsay evidence" and thus not be admissible in court. But it gets worse, because there is no chain of custody on the information itself, only on the physical object it is stored on.

To see why, consider a phone that is kept powered up: there is no way to test that those in whose custody the evidence sits prevented it from communicating, timing out or being tripped in some other way, and thus changing whilst in their custody.

The solution to that is to power the device down immediately; the dilemma is that this also destroys evidence in inaccessible RAM etc. Also, with modern designs, getting to the power source to disconnect it can trip a trigger, and data can be destroyed before the battery is disconnected.

There are no "silver bullet" solutions to this problem thus the reality is information is not evidence in the accepted sense derived from our physical assumptions covert or overt. Thus it is almost always suspect of being "fruit of the poisoned vine".

And no amount of legislation or case law / precedent is going to change that. As far as information is concerned there is no "going dark" it's "always been dark", and no amount of fanciful pretense by the legal and legislative professions is going to change that, no matter how many words they try to hide it with.

The traditional test for criminal proceedings is that the prosecutorial burden is "beyond reasonable doubt". With information as the evidence, that standard cannot be reached. This is something Comey and Co at the FBI and DOJ don't want you knowing; they know it, which is why there is a rise in parallel construction, amongst other legal process abuses.

ianfMarch 2, 2016 9:41 AM


@ EvilKiru “if someone's site security [relies on running] client-side javascript in my browser, then they're doing site security wrong.”

Sounds logical, except it's shallow logic anchored in generalized FUD about JS.

Because it goes without saying that IF the objective is to prevent e.g. DDoS attacks (as the CloudFlare splash explicitly states), THEN any thwarting of such cannot be done on/by the site that's potentially already under attack, BUT has to be offloaded/redirected to some other source, which is what Jonathan Zdziarski has apparently done here.

It then becomes a question of WHOM TO TRUST: a well-known website trusts the Javascript-invoked DDoS checker, so should we trust or distrust its trust, or stick to our guns and trust no one (that way madness lies; and, by analogy: driving every day to work we trust hordes of unknown individuals that they're sane, sober, competent drivers, and not druggies or other automobile-homicidal maniacs. We trust them because there are few alternatives to our greater objectives: to get to, and from, A to B. And yet, every year >1.25M people die in road traffic accidents.)

So while our LateSupperer's initial distrust of a JS-dependent blog could be called justified, his—he's a he, as no woman would ever dine late—subsequent SHARING of THIS DISTRUST, rather than conducting due diligence, is mostly an advertisement of his own… narrowmindedness?

    Would it have been better, if the website initially froze every access, then performed this check via opaque server Javascript, before—in vetted cases—returning control to httpd? And better for whom?
Clearly, not for Jonathan Zdziarski, who chose another, webmaster-friendly, solution for his blog's threshold security. The fact that he distrusted the Wordpress engine enough to add another layer of security should be counted in his favour, not against him. And in the end, the LateSupperer and others' momentary JS-unease/ inconvenience stands for nothing compared to any webmaster's potential headache when needing to rebuild a website after a destructive third party attack.


BTW. OT #MakeDonaldDrumpfAgain!

CallMeLateForSupperMarch 2, 2016 11:04 AM

@ianf
"fine, you don't trust Zdziarski enought to let his CloudFlare add-on perform a basic browser check, weed out potential script injection tricks, etc., …"

First, my post was addressed to all who use NoScript (and to readers who otherwise manage JavaScript). My intent was to show them what to expect if they visit the site with JS disabled.

Second, this isn't about trusting or distrusting a person; it is about recognizing JavaScript as the threat it is and keeping it disabled. Not that it is material here, but there is no basis for me to either trust or distrust Zdziarski, because I don't know him personally or by reputation.

"BUT you still expect Zdziarski to trust you—BECAUSE it is YOU?"

Cool your jets. I neither expressed nor implied any such thing. He has the right to demand JavaScript be enabled and I have the right to refuse and to surf away.

"@ CallingYouLateForSupper"
Was I supposed to retaliate by twisting "ianf"?

WaelMarch 2, 2016 11:10 AM

@CallMeLateForSupper,

Was I supposed to retaliate by twisting "ianf"?

He's already twisted. Spank him hard :)

xd0sMarch 2, 2016 12:17 PM

@J Scott Kasten.

"An over zealous prosecution could use those keys to manufacture any number of false documents and there is nothing I can do about it. The only thing that gives me the ability to refute false claims is the privacy of my secrets."

Interesting point, and I wonder how long it will be until a law firm starts to generate keys and passcodes for clients under ACP. :)

Nick PMarch 2, 2016 12:22 PM

@ EvilKiru

"I'm not @CallingYouLateForSupper, but if someone's site security depends on their ability to run client-side javascript in my browser, then they're doing site security wrong."

If you can't safely run JS in 2016, then you're doing Web surfing wrong. Get a Linux VM or a dedicated machine with a KVM switch. Web-enabled machines, especially nettops, are cheaper than they've ever been. Put all untrusted stuff on it.

SpookyMarch 2, 2016 1:47 PM

I'll admit to being morbidly curious about the potential economic fallout: take the world's most profitable tech company, take their most popular and profitable product line, and force them to keep the passcode/passphrase escrowed for the U.S. government. De facto Clipper Chip. Would U.S. customers continue to buy it? People in other countries? I wonder if Apple could sue to recoup their inevitable losses. If folks shifted most of their business over to Android phones, there is little doubt that Google would be similarly targeted. Once there is no longer any safe harbor from U.S. government surveillance, where does that leave us? No cell phones at all? I think an entire generation of Millennials would lapse into a permanent coma. Or perhaps use phones in a more limited fashion, for perfunctory, non-personal communications. You could also hack together your own phone easily enough (though it would probably be outlawed, since it does not include an approved circumvention device, lol). Interesting times; we'll be back to samizdat before you know it...

EvilKiruMarch 2, 2016 2:13 PM

@Nick P: Yes, even in 2016, well-known web sites have been found to serve up malware-laden ads. So it's still prudent to surf sites with ad blocking enabled and with as few sites as possible white-listed to run JS.

"yoshi2"March 2, 2016 2:16 PM

Security is relative, not absolute.

There is already an enormous and widespread false sense of security.
Real-world information security is in reality quite rare.
A large portion of our security stability is accomplished merely by good behavior, civility, trust, and a lack of attention to or interest in other people's secrets.

Smartphone users deserve the wake-up call that is the revelation that these devices are not information-secure and should not be used as such.
In many ways, smartphones are more like toys than secure devices. And the mass populations of smartphone users could benefit from this awareness.

Granting massive, widespread, and deep(er) access to smartphones to law enforcement could have some interesting and beneficial results:

1) American law enforcement and forensic agencies might suddenly be able to make significant gains against criminals. This benefits (American) society as a whole most of the time.

2) Those criminals who now or later distrust digital technologies because of the sweeping law enforcement advantages will be apprehended or forced to use other means or to cease or postpone their activities.

3) Non-criminals who now or later distrust digital technologies because of the changes will be encouraged to utilise devices and systems of the past which worked just fine to get both work and play done. This could have a positive cultural effect where rampant and reckless dependence upon digital devices will be reduced.

4) The digital security industry will become further stratified separating serious high-level professional encryption from trite lower-level consumer or "prosumer" encryption. This benefits the security industry by further separating inadequate security from adequate security. This also might help to educate the public about the pitfalls of reliance upon encryption in general. And those who choose not to ignore the pitfalls of reliance upon encryption might be somewhat discouraged from and led away from reckless behaviors.

5) Those who cannot tolerate the changes and who require high-level security will be encouraged to jump ship and to find and encourage continued development and dissemination of high-level security techniques and technologies.

6) Some affected users will rethink and reprioritize what they are actually seeking to protect. For example, if you are using a safe to lock up a gun and suddenly your safe is not a secure form of storage, you might rethink your priorities and decide not to own a gun anymore. This might actually enhance your safety more than having a gun around and a false sense of security.


7) The further stratification and separation of serious professional security users from those who are not will aid law enforcement in sociologically and demographically categorizing suspects and persons of interest. The further successes of lawful law enforcement benefits society as a whole and might help to improve morale within law enforcement. Improved morale within law enforcement may further have the effect of improving law enforcement behavior towards citizens. This also benefits society as a whole.

8) The studied reaction of consultants and commentators both local and abroad helps to give professional law enforcement agencies a better idea of who is trustworthy, who is educated, who is coolheaded, and who isn't and in which ways. This also helps to stratify and change the dynamics of interaction between those agencies and those consultants and commentators.

9) If by any chance more rogue activities occur because of security traversal techniques spreading, at least there is also the greater possibility of more non-rogue participants and witnesses and consultants being available to help fight against rogue activities or to reduce the damages incurred. More people understanding how the security technology works, at least in terms of being overridden, creates a more educated culture in which more people can be involved in studying the crimes and aiding the prosecuting of such crimes.

10) If the security traversal techniques spread to other countries where people are oppressed and unfairly apprehended, it is better for such people to be made aware of a false sense of security from relying upon toy and consumer-grade security. This way they can focus their efforts of communication and data storage (or lack of data storage) into techniques which are not vulnerable to the same traversal techniques. This benefits the fight for human rights and freedom by removing potential weaknesses and reliance upon techniques which were never truly secure to begin with.

Security is relative, not absolute.
Peace be with you, me, us.

ianfMarch 2, 2016 2:16 PM


Spooky: “Once there is no longer any safe harbor from U.S. government surveillance, where does that leave us? No cell phones at all?

There's an inkling of a trend among security-conscious EU journalists to switch from a smartphone to a flip-phone. Old Nokias that spent the last 5-6 years in a drawer are suddenly redeployed, and someone I know has bought such a €25 "bundled SIM" phone just in case they become hard to come by. I personally no longer travel with my iPhone, but with a mini Samsung with only an FM radio and a suite of built-in applications: alarms, calculator, calendar, etc. If I could find a smallish, unobtrusive pair of headphones with FM radio and phone capabilities to wear around my neck, I'd use that instead.

Clive RobinsonMarch 2, 2016 2:30 PM

@ Spooky,

You could also hack together your own phone easily enough (though it would probably be outlawed, since it does not include an approved circumvention device, lol).

Take a look at the UK... It's rumored Theresa May MP is a fan of "circumvention" or something similar...

I've not yet had the time to read through the latest version of UK Home Office Minister Theresa May's "Snoopers' Charter", as it only came out this week. But if it's anything like the last one, the UK reserves the right to hack anyone, anywhere, anytime, as long as at some point the device hacked was connected directly or indirectly to a network reachable from the UK (or the UK can claim it was...).

So just remember not to point your kid's automatic star-finding telescope you got them for Xmas at the International Space Station. Now that Major Tim Peake is up there, it's bound to be an infection point for some elaborate GCHQ "air gap"-crossing malware to attack your smart watch or IoT jacuzzi control computer ;-)

BoppingAroundMarch 2, 2016 4:00 PM

ianf,
> driving every day to work we trust hordes of unknown individuals that
> they're sane, sober, competent drivers, and not druggies or other
> automobile-homicidal maniacs

'Expect' might be a better word. Although even then, I recall a driving school
instructor telling us, the pupils, on the last lesson there, 'Obey the traffic
rules yourself but do not take for granted that others will'.

From my personal experience, he was more right than wrong.

GurkaMarch 2, 2016 5:44 PM

I really don’t get this. At all. No matter how many articles I read, none talks about the technical specifics. How can a backdoor possibly be inserted after the fact?

If I encrypt a file, disk, device, whatever, with a good, non-broken algorithm with no backdoors or key escrow, you obviously cannot add a backdoor afterward to retrieve any data.

What is the idea here? Is the phone seized from a suspect, and the FBI wants to add a backdoor and return the phone in the hope that he uses that very phone and enters the password? That sounds more like a key logger to me. And who thinks the suspect hasn't noticed the debate on backdoors relating to his phone?

Something is really broken in the way this case is presented. It makes no technical sense whatsoever.

Dirk PraetMarch 2, 2016 6:48 PM

@ Niko

Your network defenses don't know whether they're protecting something legally questionable and dangerous, or part of your core business.

That's not entirely how it works. The foundation of any security strategy is a meticulous enumeration and classification of activities, assets and data, for all of which a risk assessment is made following a formal business impact analysis. In short, for each asset or business process it is determined how a compromise (non-availability, loss, theft, corruption, destruction) will impact the company in terms of business continuity, monetary loss, liabilities, loss of reputation etc. The result is then weighted against the likelihood of such a compromise happening, in function of which a strategy is put in place. Essentially, there are four methods of dealing with risk:

- Accept it: "It is unlikely a tsunami will ever hit our nuclear power plant and we're not going to take any protective measures"
- Transfer it: "We'll take out a comprehensive insurance policy for in case we ever get hit"
- Mitigate it: "We will put a number of security measures in place to prevent a tsunami from hitting the plant" (e.g. building high walls)
- Avoid it: "We're moving somewhere else"

Each of those has an associated cost, the rule of thumb being that the more you wish to protect against risk, the higher the price you'll pay. Note that even accepting risk may carry an indirect cost in losing business over non-compliance with existing legislation or regulation.
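The four treatments can be compared on cost alone in a back-of-the-envelope way. All figures below are invented purely for illustration; for "accept", the cost is the expected annual loss (likelihood times impact), while the other three carry their own price tags:

```python
# Invented figures, purely to illustrate the comparison.
likelihood = 0.05           # estimated annual probability of a compromise
impact = 2_000_000          # estimated loss if it happens

treatments = {
    "accept":   likelihood * impact,        # eat the expected loss: 100,000
    "transfer": 60_000,                     # annual insurance premium
    "mitigate": 40_000 + 0.01 * impact,     # controls cut likelihood to 1%
    "avoid":    15_000,                     # cost of exiting the activity
}

# Rule of thumb from above: the more risk you shed, the more you pay --
# unless avoidance is open to you, in which case it can win outright.
cheapest = min(treatments, key=treatments.get)
```

With these made-up numbers, `cheapest` comes out as "avoid", which mirrors the conclusion drawn below about Apple's position.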

An assessment of the requested fbiOS will arguably yield a completely unique risk profile because of its rather particular nature: a prime target for every malicious actor around, serious loss in security reputation, 100% liability and zero revenue (development cost relief at best). Which translates into a completely different protection profile, associated security controls and costs thereof, even when compared to crown jewels like the source code of its operating systems.

So unless Apple hasn't got a clue what they are doing, their defense systems are actually going to be very aware of who is trying to access what and how it's classified, and will apply security policies accordingly. It is obvious that Apple is not going to simply accept the risks associated with any compromise of fbiOS, that mitigation costs will be huge due to its high risk profile, and that transferring the risk to someone else does not really seem a valid option either.

The associated cost of avoidance, on the other hand, consists of possible jail time for Tim Cook over contempt of court (unlikely), fines for non-compliance with a court order (which they can probably fight pending appeal unless there is a proven urgency) and potential loss of business from users sympathetic to the government's request (probably minimal). If they really believe they stand a good chance of winning this in court, avoidance is the only logical decision from a pure risk-management perspective.

@ J. Scott Kasten

An overzealous prosecution could use those keys to manufacture any number of false documents, and there is nothing I can do about it. The only thing that gives me the ability to refute false claims is the privacy of my secrets.

Indeed the ultimate nightmare scenario for any defendant if somehow the chain of custody gets broken. And which in the end boils down to the question: to which extent do you trust your government?

BuckMarch 2, 2016 6:54 PM

@ianf

"Sounds logical, except it's shallow logic anchored in generalized FUD about JS. Because it goes without saying…"

Does it really..?

"let his CloudFlare add-on perform a basic browser check, weed out potential script injection tricks, etc."

The server side should check for any injection tricks (you can't trust the client to be honest about that). A basic browser check, though? Interesting... So you're saying that acceptance or not of cache/cookies/JS, along with IP, referer, and User-Agent string (and every other header), is no longer enough for a basic browser check? JavaScript profiling is now necessary too!? I wouldn't have been too surprised had I been visiting via the TBB, but it does smell funny... They should've known exactly who I was, yet it appears they'd still like to get to know me even better for some unknown reason!

@Nick P

"If you can't safely run JS in 2016, then you're doing Web surfing wrong. Get a Linux VM or dedicated machine with KVM switch. Web-enabled machines, esp nettops, are cheaper than they've ever been. All untrusted stuff on it."

I know you know better, but for the benefit of others, there are a few unstated assumptions in your thesis here...
  • You're hoping that no one has caught a wild 0-day for your VM with the intent of deniably using it en masse.
  • The last (cheap consumer-grade) KVM I bought happily took certain commands from the keyboard (and although I never attempted to exploit it, I don't remember reading about anyone else who has tried the same with any KVM).
  • Nettops are cheaper than ever, for sure! But cheap enough for a single-use device for the privacy-conscious crowd?

NikoMarch 2, 2016 8:04 PM

@Dirk

I think we're talking past each other because we're addressing different questions.

1) You're answering: does it make good business sense for Apple to build fbiOS? If people are afraid of LEO surveillance, or if the public overestimates the risk of fbiOS getting into the wild, the answer might be no even if the risk of fbiOS getting into the wild is near zero.

2) I was addressing: is it technically feasible to protect your files? Phrased another way, if you have to mitigate risk, does mitigation have a high probability of success? There are two components to this: is anyone trying to hack your system, and if so, how effective are your defenses? Apple has the largest free-float market cap of any company in the world, so you can assume that the most sophisticated hackers in the world have been targeting their systems for years now. That leaves the effectiveness of your defenses. Either Apple's IT security is good enough to protect both fbiOS and their trade secrets from hackers, or it's poor enough that their trade secrets have already been compromised, in which case they need to inform their shareholders, or it's really poor and they don't even know when they've been hacked or what was taken.

name.withheld.for.obvious.reasonsMarch 2, 2016 8:27 PM

Peeling back the layers of "insert your favorite cliche regarding institutional BS", one could conclude that the real issue is probably related to "AUTONOMIC POLICING". Microsoft already has the infrastructure laid into their core architecture--they didn't hide it either; look at the EULA for evidence. RIAA, RIPA, DMCA, and DRM will now be enforced without intervention--this, I believe, is also why Congress is dragging its feet with regard to privacy legislation.

One thing that bothers me is that volume license purchasers, i.e. the moneyed, will secure a level of privacy that others cannot attain. What does it take to purchase a volume license for Enterprise Windows 10?

Apple may be fighting for way more than we understand at this point...

Clive RobinsonMarch 2, 2016 11:43 PM

@ Gurka,

I really don’t get this. At all. No matter how many articles I read, none talks about the technical specifics. How can a backdoor possibly be inserted after the fact?

Actually there have been several overviews of the technical specifics, and we have talked about them.

But here we go again,

The phone has data stored on it encrypted under keys derived from a secret 256-bit AES master key. The FBI want that data, and the only feasible way to get it is to get the master key.

The same reasoning applies to all attackers: their target is also the secret 256-bit AES master key.

Knowing this, Apple does not permanently store the 256-bit AES key on the phone or anywhere else, so there is no "key store" to target.

What Apple does is have the OS build the key, when the phone is unlocked, from two secrets. This is done in a way that makes knowing either secret alone unhelpful. Thus the FBI, and all other attackers, have to take a step back and attack the way the master key is built.

To do this you need both secrets. The first is a random number unique to the phone. The second is the user's passphrase. Thus you need the physical phone to get the first secret and the user's mind to get the second.

The first secret is stored in the phone in a way that makes accessing it a very difficult and risky physical process (the "acid and laser" technique some outlets like Ars have mentioned). The secret is also sufficiently large and random that guessing or "brute forcing" it, like the 256-bit AES key, is so improbable on human lifetime scales that it's effectively impossible. Thus you not only need the phone, you need it with the first secret intact.

However, Apple knows, as most security researchers do, that humans have very bad memories and cannot reliably remember even 4-digit random numbers. Thus humans generally use very, very weak passphrases, which is why you see these password-guessing contests.

All of which means the second secret, in the user's memory, is likely to be brute-forcible on reasonable human time scales.

Apple, like most other security researchers, knows this, so they took steps to render brute-force guessing highly improbable. Firstly, they made entering the passphrase "manual only"; secondly, they made each successive guess take longer; and thirdly, they gave the user an option so that, if selected, after ten bad guesses the phone overwrites the first secret, making access to the data as hard as brute-forcing the 256-bit AES master key.
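A much-simplified sketch of the scheme just described. This is illustrative only: the real device entangles the passcode with the UID inside hardware rather than via plain PBKDF2, and the delay and wipe policy shown here is an invented stand-in.

```python
import hashlib

DEVICE_UID = bytes.fromhex("9f" * 32)  # secret #1: unique, buried in the phone

def derive_master_key(passcode: str) -> bytes:
    """Secret #2 (the passcode) is stretched together with the UID;
    neither secret alone gets you anywhere near the 256-bit key."""
    return hashlib.pbkdf2_hmac("sha256", passcode.encode(), DEVICE_UID, 100_000)

class PasscodeGate:
    """Escalating delay plus a ten-strike wipe, per the description above."""

    def __init__(self, wipe_after: int = 10):
        self.failures = 0
        self.wipe_after = wipe_after
        self.wiped = False          # True once secret #1 has been erased

    def try_passcode(self, guess: str, actual: str):
        if self.wiped:
            raise RuntimeError("secret #1 erased; data is gone for good")
        if guess == actual:
            self.failures = 0
            return derive_master_key(actual)
        self.failures += 1
        if self.failures >= self.wipe_after:
            self.wiped = True       # overwrite secret #1: game over
        self.delay = min(2 ** self.failures, 3600)  # seconds a device would stall
        return None
```

Nine wrong guesses still leave the data recoverable with the right passcode; the tenth erases secret #1, after which even the correct passcode derives nothing, because the master key can no longer be rebuilt.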

But Apple had another option for the user, which was to "back up the data to iCloud". If this was selected, then the backup of the data was not encrypted via the two secrets, which gave a way to bypass them. But for some reason, on FBI orders, the owner of the phone, SB County, made changes that stopped automated backups working, thus effectively slamming that door (if it was actually open). Which is why, along with other FBI behaviour, quite a few people believe this was deliberately orchestrated by the FBI, with considerable malice, to force the use of the AWA on Apple as a very public "pissing contest" to show whose are brass and whose are steel.

What the AWA application suggests is that Apple write a new OS to update the iPhone with, one that will bypass the protections against brute-forcing the passphrase. The FBI also adds a load of disprovable nonsense about it being unique to that single iPhone.

The reality is that, unless Apple has taken other protective steps they are not talking about for "trade secret" reasons, the part of such a new OS that bypasses the security features will be generic, and wrapping it up in extra code to try to make it unique to that phone will not make the part of the code that bypasses the passphrase protection any less generic.

But as the FBI and DOJ know, they cannot legally promise Apple that this new bypass OS will not be made completely public at some point in the future.

The problem is that if it is possible to write the bypass code, then once the code is known to exist and work, there is nothing to stop other LEOs in the US and other brutal regimes going down the same route, and thus Apple would be subject to a very undue burden. And as the FBI very deliberately chose to make it all public, it adds further fuel to the notion that it's a pissing contest, the aim of which is to destroy Apple's credibility in the world markets and thus set an example to the world's manufacturers that you "don't mess with the Feds".

The thing is, it's not just the LEOs and brutal regimes this generic bypass code will help; it's many other undesirables as well. The FBI want the code to run from RAM, and without going into details this could easily be to sidestep other unstated protection mechanisms. If the generic part of this update that bypasses the passphrase security does run from within RAM, you then have to ask: are there other ways to get the code into RAM and get it to run without using the update process?

History tells us that yes, there almost certainly is, and that it will not take even "low hanging fruit malware" criminals long to find it, let alone more sophisticated undesirables...

Spooky • March 3, 2016 2:53 AM

@ ianf,

Indeed, that's probably a very prudent idea. I'd love a small, stateless phone designed to retain no information at all. Its tiny, well-tested core of system software would reside safely inside an actual ROM. Power cycle is equivalent to factory reset. Software updates require a soldering iron. No camera, no display, no Apps. An FM radio would make a fine addition! A quaint box no larger than a pack of playing cards, with HP 11c style buttons, a headset jack, an FM tuning dial and an SPST illuminated rocker switch for power. And a battery standby time of months...

Flip phones are becoming a rare sight in the U.S. as common carriers gradually force customers to upgrade to phones that support 3G/4G. My elderly parents ended up moving to smart phones several months ago; you've never heard such whining... Honestly, it's hard to blame them. If I had my way, phones would continue to be simple, single-function devices, full stop.


@ Clive,

I know you say this in jest, but almost nothing would surprise me at this point! NSA and GCHQ are so accustomed to scratching each other's backs; constitutional restrictions mean nothing to them. What one cannot do, the other one can (and does). Between those two gluttons, they always get a full take and they pretty well know it. I'll have a go at reading the revised Snooper's charter, though I fully expect it to read as you say: carte blanche for expanded powers of surveillance. Sadly, the best way to avoid their dragnet is to avoid most newer technologies and products. That's tough to manage.


Cheers,
Spooky


Dirk Praet • March 3, 2016 7:37 AM

@ Niko

I was addressing: is it technically feasible to protect your files?

The short answer to that is a yes, but with the limitation that especially in this field there unfortunately never is such a thing as a 100% bulletproof solution. What I'm saying is that you're oversimplifying the issue by narrowing it down to a strictly technical "one size fits all" network defense problem, which it isn't.

A couple of years ago, the White House received an on-line petition, signed by a massive number of people, to build a Death Star. The official - and truly epic - reply by Paul Shawcross, Chief of the Science and Space Branch at the White House Office of Management and Budget, was the following:

- The construction of the Death Star has been estimated to cost more than $850,000,000,000,000,000 ($850 quadrillion). We're working hard to reduce the deficit, not expand it.
- The Administration does not support blowing up planets.
- Why would we spend countless taxpayer dollars on a Death Star with a fundamental flaw that can be exploited by a one-man starship?

Nowhere did he say that it was technically not feasible.

@ Nick P, @ ianf

Get a Linux VM or dedicated machine with KVM switch. Web-enabled machines, esp nettops, are cheaper than they've ever been.

Or run TAILS on a network-isolated old laptop stuffed with RAM. Way less hassle.

ianf • March 3, 2016 7:38 AM


@ Buck […] The server-side should check for any injection tricks (can't trust the client to be honest about that).

Listen, neither of us knows anything about the threat vector(s) that made Jonathan Zdziarski turn to an off-site browser checker. We don't know how that blog is run, nor the degree of his control over the server… I suppose if I invested a couple of hours, I could find /some of/ that out, but then to what use? It's a Wordpress blog running under his own domain – and that's all we need to know.

Ultimately, as I said, it comes down to the question of a hierarchy of TRUST. There are websites that we invest with our trust—this blog is one, and not because of its absence of JavaScript—and those where we'd be well advised to exercise caution. Zdziarski's profile isn't exactly unknown here; moreover, the blog has been linked to 5-10 times in February alone. His easy-to-check googlerepute is that of someone who's EVIDENTLY COMPETENT in tech matters, hence (rationally thinking: presumably) SOPHISTICATED enough not to spread viruses etc.

We can argue back and forth about what JD should have done to satisfy everyone's pet web-access phobias, but only he knew what had to be done, and at what cost to himself. It's not like we live in a perfect world with no thresholds anywhere… you take away one, another will fill the vacuum.

That "need JS for CloudFlare" one was enough of a notice to invoke this generic trust-or-distrust dilemma; only here, rather than execute his “right to refuse and to surf away,” our LateSupperer chose to whine about it (so those of us running NoScript would be forewarned—mission accomplished, and we're now so much the wiser for it).

    This enough of an elaboration, or would you still require a ruminative parable of Clive-Robinsonian proportions?

p.s. @CallMeLateForSupper: "everything is copy." —Nora Ephron's mother.

ianf • March 3, 2016 8:49 AM


@ Spooky, in Europe there is a special class of (usually flip, occasionally chunkily robust) palm-sized mobile phones designed for older folks with failing eyesight and memory. They sport large buttons, EXTRA LARGE DISPLAY TEXT, usually a red urgent-help-needed function button (sending a prerecorded message and/or text—I think—to a designated number), and not much else. Underneath they're basically 2G Nokia/equiv. with a redesigned shell. No model with FM radio that I've seen though… I'm sure you can get them in America as well, only via dedicated retail channels (see ads in Golden Autumn Magazine etc).


@ Dirk Praet […] “A couple of years ago, the White House received an on-line petition to build a Death Star, signed by a massive amount of people.

    A generation of sheep petitioning the top sheepdogs to be more like wolves in sheep's clothing?

[…] run TAILS on a network isolated old laptop stuffed with RAM. Way less hassle.

You misspoke: way too much hassle. Not the TAILS portion but the isolated old RAM-filled laptop. Which will give up after a while, need to be replaced, etc. I strive to simplify my electronic life, not add complexity to it. Browsing the web implies bookmarking, saving pictures and quoting text snippets, sending mail with shared links from it for later/ outside use – how would that be accomplished in a secure toolchain fashion from a "separate VM unit for untrusted stuff?"

p.s. anyone willing to exchange a brand new latest-gen GSM 64Gb iPhone 6 Plus for a factory-sealed 64Gb iPhone 6 + unblemished 2yo 16Gb iPad Mini (pre Touch ID) give me a holler.

Gurka • March 3, 2016 11:36 AM

@Clive Robinson: Thank you for your reply. I still don't get it. If Apple's key/PIN-stretching algorithm has artificial delays not needed for key stretching, then I would say the backdoor is already in place.

Please note that I agree fully with your principal analysis; I totally understand that a backdoor will lessen the security for everybody, and that backdoors are unacceptable and can be used by everyone from nation states to criminals to hackers.

But, as I said, I cannot see that you can put a backdoor in after the fact. That is in principle impossible; otherwise cryptanalysis would not be a field of research. Just add a backdoor and you are in (perhaps after a trivial brute force).

A "backdoor", to my knowledge, is not a thing that depends on key strength, passphrase strength or algorithm strength, but something that bypasses the security altogether, by using alternative keys or other weaknesses. And not something you can add on later. If data is encrypted with a non-faulty algorithm and proper protocols, the data is secure. The only thing you can do is snoop or keylog when the owner is accessing the device and enters the passphrase.

So, Apple must already have screwed up the security beforehand for this to work. If I understand you correctly, that would in this case be the key-stretching algorithm that uses unnecessary delays. So the only thing the FBI really has to do is use the algorithm without the delays, running on an ordinary computer, to brute force a weak passphrase. Extracting the encrypted data (in encrypted form) from a phone's flash memory to an image on a computer should be trivial.

I don't get this "backdoor" thing. Either the security is already broken in principle and the convoluted access is misnomered (is that a word?) as a "backdoor", or the FBI is asking for something that is mathematically impossible.

So, I take it that the FBI want Apple to assist them in exploiting an already-present vulnerability. This implies that the FBI could do this entirely on their own, as could everyone else, and iPhones are already accessible to hackers, nation states, terrorist groups and so on.

The old saying that physical access means all bets are off presupposes that the rightful user at some time in the future accesses his or her device and enters the password, and this is snooped by a keylogger or similar. If someone steals/seizes a device and can access the data without the owner's assistance, the security simply must have been broken _before_ the user started to encrypt things.

Please don't get me wrong here: but if the FBI prevails in this, the only logical conclusion is that Apple's security is bogus. It cannot be the FBI's fault. If the FBI can do it, so can everyone else, and the FBI is not the one that put the flaws in. My foggy understanding at this point is that the key-stretch algorithm is faulty, with artificial delays instead of computationally dictated ones. And it still depends on the user selecting a dumb password/passphrase/PIN or whatever. Exploiting dumb passwords is hardly a "backdoor". Removing unnecessary delays is hardly a "backdoor" either. Or, if one must use the term "backdoor", the backdoor was already there, inserted by Apple, in the stock OS.

Nick P • March 3, 2016 12:00 PM

@ Buck

"While hoping that no one has caught a wild 0-day for your VM with the intent of deniably using it en mass"

That is a risk. The 0-days are *mostly* discovered in other software. So, one's risk is at least lower. Anyone lucky enough to be using a separation kernel or high-quality microkernel will have a *very tiny* VM. So, a few rounds of hunting might dry up all the vulnerabilities. Not happening with browsers and such any time soon.

"The last (cheapo consumer-grade) KVM I bought, happily took certain commands from the keyboard. (and, although I never attempted to exploit it, I don't remember reading about anyone else who's tried the same with any KVMs)"

There are KVMs and there are security-focused KVMs. I advocate the latter. If one can't get them, then get the simplest, dumbest, cheapest KVM you can that operates solely at a physical level, with button presses on the device itself. That's what I used to use. Really cheap. Besides, who is getting 0-days in KVMs?

"Nettops are cheaper than ever, for sure! Cheap enough for a single-use device by the privacy-conscious crowd though?"

That one isn't fair. Many people do almost everything in web browsers. These devices (and Chromebooks) exist due to demand from that market. So, it's a worthwhile investment to have one dedicated to web browsing. Having a cheap throwaway doing it optionally with things like LiveCD's is even more justified given all the malware. For budget conscious, one can split their money between two machines: a good one built from instructions on the Internet by PC builders; a disposable one just good enough for Internet and maybe streaming video.

@ Dirk Praet

"Or run TAILS on a network isolated old laptop stuffed with RAM. Way less hassle."

That's complementary to my solution. However, TAILS makes one stand out, and high-strength attackers are focusing on it. So it reduces one set of risks while increasing another. A security-focused Linux that's less popular, with Tor optional, would be better. A Linux LiveCD by itself is less likely to be compromised. One can also apply SELinux-style isolation and so on. Many options for the untrusted laptop.

Buck • March 3, 2016 5:39 PM

@ianf

I don't think anyone is too concerned with the blog author's motives or intent... People trying to preserve their privacy online are more worried about traffic correlation techniques being performed by a Global Active Adversary. See:
Issues with corporate censorship and mass surveillance
Panopticlick
evercookie

@Nick P

Fair enough on the third point. "Single-use" was definitely an exaggeration on my part! Still, if you're trying to maintain anonymity online, you'll probably want multiple profiles (one for employment, one for family & friends, one for security research, one or more for financial institutions, one or more for online purchases). I agree that separate netbooks is the way to go, but the cost can still add up pretty quickly!

I was looking more into KVMs, and I found a comment from you on Krebs in 2012 that referenced an EAL5 KVM. ;-) The Common Criteria evaluated switches do look pretty expensive. Data isolation and firmware (if any) would be a concern with these things - a physical toggle is absolutely necessary. Shouldn't actually be too hard to build one... Though if you're using a USB keyboard, it might not matter how secure your KVM is - Apple Keyboard Firmware Hack Demonstrated

Although, I really wouldn't be surprised if a number of PS/2 keyboards had buffer overflows in their controllers' firmware. I found some custom code for a bunch of them, but no luck with my searches for binary dumps of the stock firmware.

Dirk Praet • March 3, 2016 6:51 PM

@ Nick P

However, TAILS makes one stand out and high-strength attackers are focusing on it

True, and they're most welcome to waste as many resources as possible trying to monitor whatever it is I'm doing with it. As it matured (they're on Jessie now), I have come to use it as my default OS for day to day surfing, messaging and other ordinary stuff I like to conduct in a more or less private and anonymous way without being tracked by world and dog. And for which it really is an excellent out-of-the-box solution with regular updates.

For matters that need an additional level of security, I take to other solutions, most of which - like the ones you mention - have been discussed on this forum on previous occasions. The only thing currently ticking me off with TAILS is that apps like OnionShare and Ricochet still don't work properly with it.

@ ianf

Not the TAILS portion but the isolated old RAM-filled laptop. Which will give up after a while, need to be replaced, etc.

I don't see the problem. I get people asking me to have a look at their problematic (aging) laptops all of the time. In many cases, it's just a more cost efficient solution for them to buy a new one instead of replacing/upgrading parts on top of my service fee. So it often happens that they just give it up over a fine bottle of whisky or sell it to me really cheap in exchange for assistance with setting up the new one. I've got an entire stash of such machines in one of my closets.

how would that be accomplished in a secure toolchain fashion from a "separate VM unit for untrusted stuff?"

You could save to external media or use a local network router with an encrypted storage device attached to it.

Clive Robinson • March 4, 2016 8:52 AM

@ Gurka,

... the only logical conclusion is that Apple's security is bogus.

It's not just Apple's security it's everybody's security when it comes to consumer products.

The reason is the customers are by and large fairly flawed humans, and human failings are the root cause of nearly all security issues.

In this case two human failings are at the bottom of it,

1, The human mind's cognitive inaccuracy.
2, The problem of "it's not my fault" syndrome.

Because humans make mistakes, things go wrong, people's feelings get hurt, fingers get pointed, names get called and biased stories get told. Ask anybody in a customer-facing part of a service industry and they will have war stories about "(ab)users" and "custards" to fill a quiet afternoon or four. To make matters worse, there is the triumvirate of blame-givers - Sales, Marketing and Advertising - who agree on only one thing, "Customers pay, so they must be right, even when they are wrong", which gives us the dreaded "the customer is king" and other such meaningless management-speak.

So when a custard comes in and complains that they can no longer get at their data because "of the stupid design", the triumvirate demand the engineers not let this happen... And as engineers' opinions are worth less than "roach dung", that is what engineering have to do. So because some customers do stupid things, the engineers have to "backdoor" their own products to keep the triumvirate, with their "every customer is precious" idioms, happy.

Which brings me to what is and is not a "backdoor". The simple, base definition is "any deliberate method or process that bypasses a security method or process", which differs from the definition of a vulnerability only in the word "deliberate". That is, at a base level a "backdoor" is a vulnerability that has been made and inserted by conscious design. In practice a backdoor means many different things to many different people, and can have a technology or method bias.

What Apple is saying is that the encrypted data on the iPhone is "secure", which is true enough based on our current understanding of AES. That is, the data is protected by keys derived from a 256-bit AES master key. Further, Apple claim that the master key is not stored on the iPhone when it's locked, nor in any other place, which is factually correct. They also claim that the 256-bit AES master key is built from two secrets, one built into the phone and the other known only to the user. They further claim that the secret built into the phone can be deleted in various ways, making it impossible to build the 256-bit master key... All true.

What Apple have not mentioned is that the second secret --that the user knows-- is a real lame duck due to human failings (customers don't like being called useless / stupid / idiots / fools / etc.).

Security professionals and Apple know that even a four-digit PIN code is a challenge to a significant fraction of their customers. Worse, they know that of the 10,000 PINs only a couple of hundred will actually be used by the majority (i.e. birthdays, famous dates, mathematical constants and easy numbers like 1000, or the equivalent in a different base for the geeks). Further, Apple and security professionals know humans are oh so lazy; if there is a way to make life easy, then that's the way most humans will go, so you would expect 1111, 3333, 5555, 7777, 9999 and 0000 to be high on the frequently-used-PIN list. It's this knowledge, frequently shown to be true by password-cracking contests, that shows just how easily tech beats brains when it comes to security. To make it worse, frequent use of the PIN wears away the numbers it uses more than the other numbers, so often the four digits are known and it's just the order that remains unknown...

Knowing this real human failing, Apple took steps to limit the damage it causes. That is, the second secret --that the user knows-- is not really a secret, because it's just a guess or two away in many cases; potentially the PIN entropy is five bits or less. Further, Apple know that even if the user were given full alphanumerics up to fifty characters in length, the average user will use a four-digit PIN at best, and won't set the option to delete the first secret if too many guesses are made. Thus they added an increasing time between retries and passphrase entry only from the screen.
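As a rough back-of-the-envelope illustration of that entropy claim (the coverage figures below are the assertions in the text, not a measured dataset):

```python
import math

nominal_bits = math.log2(10_000)  # a truly random 4-digit PIN
print(f"nominal keyspace: {nominal_bits:.1f} bits")

# If a couple of hundred popular PINs cover the majority of users (the
# claim above -- an assumption, not measured data), a guesser who tries
# those first faces far less than the nominal keyspace:
popular_pins = 250
popular_bits = math.log2(popular_pins)
print(f"popular-PIN set: {popular_bits:.1f} bits")
# And since popular PINs are themselves heavily skewed (1111, 0000,
# birthdays...), the effective entropy is lower still -- hence
# "five bits or less" for the worst cases.
```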

That is what we know, from that point on we are guessing based on what appears to be likely assumptions. Which is what the FBI / DOJ have done for their submission to the magistrate, and at this point Apple are saying little or nothing for good reason.

When it comes to this sort of security, it really has a high degree of obfuscation or obscurity at its base, and Apple quite rightly regard it as a trade secret. Thus Apple give the equivalent of the neutral "no comment" that the FBI / DOJ / CIA / NSA / MIL and all LEOs do when questioned.

It's at this point I enter the annals of "philosophical cruelty to cats" that many before me have entered in search of analogies, of which Schrödinger and his cat in a box is one of the better known. It is highly likely that the assumption that Apple's security on the passphrase can be bypassed is made not just by those outside Apple but by those within Apple as well. That is, nobody --and I do mean nobody-- there has actually done anything other than make assumptions about the existing code.

The reason is much like boxers "psyching out the opponent": if somebody looks unbeatable, their opponent loses an edge, because they subconsciously believe in the "invincibility" and don't go all out. You see this in research: if people don't know something can be done, they tend not to try to do it; they move forward in baby steps, not major jumps. It's why we have the phrase "the leading edge is the bleeding edge" - the little steps are resource-expensive, the big steps are reputation-expensive if wrong.

So Apple's passphrase-protecting code is the assumed cat in the box: you really don't know, until you lift the lid, firstly whether it's there and secondly whether it's a pussycat or a hellcat, and thus what sort of fight you will get from it. Apple want people to think it's not just a hellcat but fully soul-stealing demonic as well, so that people won't try. But as I noted the other day, we use the onion as an analogy, where each of the many layers is "assumed" to be equally difficult. History shows that in reality an egg would be a better analogy: although when you crack the shell you have a mess on your hands, it's really easy to get the bits you want. The latest such is the special DROWN attack, which Matt Green has blogged about. The original DROWN attack required a high level of resources and as such was a high-level or state-level attack. However, having cracked the initial step, the investigators quickly found ways to reduce the resources needed down to what you might expect of a back-bedroom scriptkiddy.

Now, as far as assumptions go, you have to make your mind up about who was in the driving seat when Apple developed their passphrase-protecting code: the security people or the accounts people? That is, what level of resources went into its development?

As I've mentioned before, there are various techniques well known to copy-protection and malware developers that can be used to make bypassing the passphrase code hard enough that the "acid & laser" physical attack would be a better use of resources. If the security people were in the driving seat, that's what I would expect Apple to have gone after development-wise. If, however, it was the accountants, then a quick "grasp and glance" at the source code would be sufficient. Where on this line the iPhone actually rests is unknown; the FBI want people to think it's a quick "feel of the source", Apple want you to think more "get out the nitric". It's likely that, as Apple are going down the hardware "enclave" route, they actually put some effort into making the code secure in and of itself in various ways, prior to adding several layers of obfuscation on top to make looking at disassembly output a real challenge. It's certainly what I would do, and as it's code it has a one-off development cost, not, as with the hardware enclave, a per-item manufacturing cost.

Which brings us around to the two-step update whereby the existing passphrase-protection code can be replaced in some way. My guess is that this is a vulnerability, not a backdoor, and was caused by the triumvirate that causes customer service such pain with their "the customer is always right" mantra.

At a pure guess, I would say that the security people were aware of the update process and specified that it not cover certain aspects of the security code. But... the triumvirate, backed by manufacturing with "recall scare" tactics, had the security people overruled. The result: a great big gaping security vulnerability that is going to prove quite costly to Apple and that could easily have been avoided. It's also why I said the other day that I would expect the next general release of an iOS update to contain a piece of code putting things the way the security people originally intended.

Of course Apple could go the Amazon way - "just capitulate for a quiet life" - and turn off all customer security... Which is not the way to earn the confidence of your customers. It will be interesting to see what Google and their manufacturing partners do with Android, and what Nokia/MS and RIM do.

That's the way I currently see things as an outsider.

But on to your specific concerns,

But, as I said, I cannot see that you can put a backdoor in after the fact.

I suspect --and the rest of your paragraph suggests this is the case-- that you are limiting what you consider a backdoor to the actual encryption algorithm. In which case you would be correct. But as I explained above, the FBI is not interested in backdooring the encryption, the key generation, the first-secret protection or even the passphrase expansion/stretching. All the FBI say they want is to backdoor the passphrase retry protection so they can brute force it, which they assume unrolls all the other security protections. But in reality I don't think they actually care whether it does or not, as long as they think they can make John Q. Citizen think Apple is "aiding terrorists, thus is a traitor to the US people"; they damage Apple and scare the other Silicon Valley corps. Better still if they can "divide and conquer" and in the process inflict more damage on Apple (please note I'm very far from being a "fan boi": the last Apple product I purchased was an Apple ][, back before IBM PCs had been thought of).

So, Apple must already have screwed up the security beforehand for this to work.

As I noted above, yes, they appear to have done so with the update function, which, as I also noted, they can partially fix with a global update to the updater code to stop the security code in the OS being overwritten in future.

So the only thing the FBI really has to do is use the algorithm without the delays, running on an ordinary computer, to brute force a weak passphrase.

Yes, but due to the first secret it has to be run on the actual phone itself, not an ordinary computer. As I noted above, the passphrase has little real entropy --just a few bits-- which is why a brute-force search of the passphrase space might work if the passphrase is short. The bulk of the entropy that makes a brute-force search on the 256-bit AES key impractical is in that first secret; it's getting at the first secret, to make the master-key generation work, that is the target if data recovery is the aim of the FBI's actions (which, as I noted, it most probably is not).
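A minimal sketch of that two-secret construction, assuming a PBKDF2-style tangling step (Apple's real derivation is not public; the names and iteration count here are purely illustrative):

```python
import hashlib
import os

# First secret: 256 bits fused into the hardware; never leaves the phone.
device_secret = os.urandom(32)

def derive_master_key(passcode: str) -> bytes:
    # PBKDF2 stands in here for the undisclosed tangling/stretching step.
    return hashlib.pbkdf2_hmac(
        "sha256",
        passcode.encode(),  # second secret: the user's (weak) passcode
        device_secret,      # keys the derivation to this one device
        100_000,            # stretching work factor (illustrative)
        dklen=32,           # 256-bit AES master key
    )

k1 = derive_master_key("1234")
k2 = derive_master_key("1235")
assert k1 != k2 and len(k1) == 32
# Off-device, an attacker without device_secret faces the full 256-bit
# space; on-device, only the passcode's few bits of entropy remain --
# which is exactly why the guessing must run on the phone itself.
```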

So, I take it that the FBI want Apple to assist them in exploiting an already-present vulnerability. This implies that the FBI could do this entirely on their own, as could everyone else, and iPhones are already accessible to hackers, nation states, terrorist groups and so on.

Yes, the best interpretation is "the FBI are cheapskates" who would rather use the DOJ budget and Apple's profits than do it themselves.

However there are several assumptions in there,

1, A disassembly of the iOS code that protects the passphrase is reliably possible.

2, That any bypass code written will run in RAM.

3, An attacker has another way to get the code into RAM and execute.

4, Or Apple have been slipshod with the way they protect their code signing keys and process.

The important thing to note is that the vulnerability in question, in the iOS updater, is really only exploitable by Apple, if 4 is not true.

If someone steals/seizes a device and can access the data without the owner's assistance, the security simply must have been broken _before_ the user started to encrypt things.

No: the security rests on the two secrets. The first appears reasonably well protected; for the second --the user passphrase-- the protection depends largely on the user. The FBI know that what they propose will only work if the user had a short or weak passphrase. If the user passphrase is strong, then the brute force may well still be going when the FBI is no more.

It's the same with,

The old saying that physical access means that all bets are off presupposes that the rightful user at some time in the future accesses his or her device and enters the password, and this is snooped by a keylogger or similar.

The first secret, required to build the 256-bit AES master key, is made to be a "write only" / "one way" / "hidden variable" holding the bulk of the entropy of the AES key. If it cannot be read out of the device, then it has to be bypassed some way to get the master key built in RAM. There are quite a few attacks via physical side channels that might leak the key by remote interrogation, but if somebody knows this can be done they've certainly not said anything.

Likewise, if you can "shoulder surf" the user when they unlock or use the phone, then you may see their passphrase or see individual file content (which might allow some kind of known-plaintext cryptanalysis, but that is unlikely).

The problem with a keylogger is actually getting it onto the phone in the first place; if you can, you could just as easily put on code that would read the keys out of RAM and either send them out or store them somewhere they won't get wiped when the phone locks.

Again, there might be an RF side channel by which key presses could be detected a few feet or so from a user as they unlock the phone, and this might also get cross-modulated onto the GSM etc. carrier when the phone rings. You would have to experiment, and I'm reasonably sure that some US government agencies have done so as a matter of "normal investigations". The NSA would certainly have done it for the "ObamaBerry".

With regards,

My foggy understanding at this point is that the key stretch algorithm is faulty with artificial delays instead of computably dictated ones.

As far as I know, the key-stretch algorithm does what it was designed to do, which is add a delay of 80 ms or thereabouts. The time interval chosen is a compromise, based on how long you can delay without causing the valid user to complain of the slow response that gave rise to lost calls etc. The longer delays when incorrect passphrases are entered are not, nor reasonably could be, based on anything other than a simple timer.
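For what it's worth, a designer targeting "about 80 ms of stretching" would typically calibrate the iteration count against the actual hardware rather than insert a sleep. A rough sketch of the idea, using PBKDF2 as a stand-in for whatever stretch is really used (the numbers are entirely machine-dependent):

```python
import hashlib
import os
import time

def calibrate(target_seconds=0.08, probe_iters=20_000):
    """Return an iteration count whose stretch takes ~target_seconds here."""
    salt, pw = os.urandom(16), b"probe"
    t0 = time.perf_counter()
    hashlib.pbkdf2_hmac("sha256", pw, salt, probe_iters, dklen=32)
    elapsed = time.perf_counter() - t0
    # Scale linearly from the probe run (PBKDF2 cost is linear in iterations).
    return max(1, int(probe_iters * target_seconds / elapsed))

iters = calibrate()
print(f"~80 ms of PBKDF2 on this machine: about {iters} iterations")
```

The point is that the delay then comes from real computation the attacker must also pay for, whereas a bare timer can simply be patched out.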

As for,

Please don't get me wrong here: but if the FBI prevails in this, the only logical conclusion is that Apple's security is bogus.

No, it's not bogus; it's based on compromises that were reasonable at design time. The vulnerability that people are talking about is predicated on access to Apple's code-signing key. If the FBI cannot get to that key or force Apple to use it, then this whole case is just the FBI using FUD to get its very undemocratic way and scare Silicon Valley corporations.

This FBI attack can only work if the signing key, or direct access to it, is in US jurisdiction. One way Apple could have avoided the issue was by not having the signing key usable in any one jurisdiction, and there are a couple of ways this could be done.

Firstly, reorganise the company structure such that even though "head office" is in US jurisdiction, the company doing the signing is both in a separate jurisdiction and not subordinate, in the legal sense, to head office (such setups are common for tax-reduction purposes, so are known entities).

Secondly, and more importantly, split the signing key across several jurisdictions, such that the updater will only accept code with all parts signing. A very simple way is multiple signing, such that there are, say, three public keys in the updater and the code has to be signed by all three private keys, which are held in different jurisdictions. Such a simple system has a number of disadvantages, however, which is why combining a key-sharing system with a public key system, such that there are three or more private keys and a single public key in the updater, is a better way to go. You can build such a system using RSA, but ECC would be better; it's something I'm looking into, but my maths is not what it once was.
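The "updater only accepts code signed by all parties" check could be sketched roughly as below. This is a toy illustration only: HMAC tags stand in for real public-key signatures (in practice you would use RSA/ECC or a proper threshold scheme, as noted above), and the jurisdiction names are invented:

```python
import hashlib
import hmac

# Toy stand-in: each "jurisdiction" holds an independent secret key.
JURISDICTION_KEYS = [b"key-ireland", b"key-switzerland", b"key-japan"]

def sign_update(code, secret):
    return hmac.new(secret, code, hashlib.sha256).digest()

def updater_accepts(code, tags):
    # Every party must have signed; a missing or forged tag
    # rejects the whole update.
    if len(tags) != len(JURISDICTION_KEYS):
        return False
    return all(hmac.compare_digest(sign_update(code, k), t)
               for k, t in zip(JURISDICTION_KEYS, tags))

firmware = b"ios-update-9.3"
tags = [sign_update(firmware, k) for k in JURISDICTION_KEYS]
print(updater_accepts(firmware, tags))      # True
print(updater_accepts(firmware, tags[:2]))  # False: one signer missing
```

With a genuine threshold signature the updater would hold a single public key, so outsiders could not even tell how many parties hold shares, which is the advantage of the key-sharing approach mentioned above.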


GurkaMarch 4, 2016 11:21 AM

@Clive Robinson: Thank you once more for a long and interesting answer! I agree with most of it, and the things I'm not agreeing with just boil down to semantics. I'm still not prepared to call it a "backdoor" if it's something you can put in after the fact, and I don't consider a weak password broken security, just user error.

If I understand you correctly, we agree that the weak part is the key stretch, but we differ on the specifics. I cannot imagine why any designer would insert a delay (of 80 ms or whatever) instead of just increasing the computational load of the key stretch. And 80 ms seems way too low. Surely you don't need to enter it to answer a call? That seems stupid. The security should secure the data, not the essential function of the phone: making calls. No user is going to enter anything just to answer.

I was under the impression that the key was asked for at boot, and PINs just secure the screen and act as a keylock, as with ordinary full-disk encryption and an unrelated screensaver with an unrelated password. This seems to me to be "how you do it". No one can add something to my computer to get my data without me being stupid enough to enter the boot password while a key logger is installed.

Surely this key stretch can be extended with the necessary computations to several seconds at boot time, and the key can be many tens of digits, as it should be.

But I think I understand the concept now. It's called "compromise", and it is Apple's fault. Sadly, because it would be a lot easier to dismiss the FBI as just math denialists. And again, if the FBI can do it, so can everyone else. So this part of the debate is over: iPhones are insecure. My thought here is that it's better when bad security fails than when it lives on and nobody knows about it. So, even though I'm NOT taking the FBI's side, certainly not, I cannot possibly see how one could side with Apple. Two wrongs. Apple made a "compromise" by totally crippling security for everyone aware of these things, just to add superficial security for the four-digit morons.

That is to say, the public "debate" between security experts and reality versus the FBI and "national security" halfwits is relevant, and you can take sides (the right one, or the math denialists/backdoor advocates), but this doesn't really apply to this case. Apple's system was already broken, and the FBI only wants to exploit it. The FBI is just too lazy to implement its own application to exploit a backdoor already in place. There is no security philosophy or principle to learn from this.

And one more thing. If Apple's key stretch algorithm is a company secret, how do we know they're doing it right? This should obviously be open source, standardized, and peer-reviewed. And the FBI could then implement it easily, without harassing Apple. I feel I must thank the FBI for exposing this lack of security and giving me yet another reason not to trust proprietary technology.

But again, thank you for explaining the specifics! I now understand that the "backdoor" thing was about something I normally wouldn't consider a "backdoor" _added_ afterward, so it was just semantics. So, again, it was not the world going mad and thinking you can backdoor security after the fact, but just a slightly different and, for me, novel use of the term. The conditions to weaken the security existed beforehand, and this news and public debate are only about exploiting them. :)

Marcos El MaloMarch 4, 2016 12:27 PM

I just came up with a solution that respects my right to secure my shit while also respecting the right of people to be dumbasses.

The National Voluntary Password Databank

The idea should be clear from the title. "Patriotic Americans" and other useful idiots can register their passwords with the government, who will keep them in a "secure" database on "secure" government servers. To increase public acceptance, there will be a mechanism in place for users to retrieve forgotten passwords. May I suggest using offshore customer service reps, so passwords can be retrieved with a simple phone call?*

*Security questions can be "What is the first letter in FBI?" and similar, so as not to over task users.

Some jerk stole my idea on Medium. https://medium.com/p/c9a49a6241f5

Sancho_PMarch 4, 2016 5:35 PM


@Gurka, re your posting from March 3, 11:36 AM

Your “I still don’t get it” reminds me of myself - when I don’t want to get it.
I was told my stubbornness is a matter of age …
- but I don’t think that’s true in your case ;-)

However, you are right by questioning the “backdoor” terminology, it’s wrong here.
In fact it is the front door the FBI wants to exploit.

Let me try an analogy:
Apple has designed a building with a secure front door and a user choice of a simple or strong key. Even with the simple key [1], there is more than one safeguard in place so that an adversary (in this case the FBI) can't simply pick that lock+door construct; it might blow up the whole building.

Now the FBI wants Apple to develop and deploy - and that's crucial - a system to replace that front door. The new door must not have any safeguards, but the cylinder must remain (same key [2]).
Additionally, the FBI wants Apple to develop and deploy a simple data-access path [3] to the cylinder so they can easily pick the lock using a tool instead of their slow fingers.

But again: “a system to replace”, not just to replace that particular door [4].

Yes, the FBI knows “the math is working” (AES256) so they have to attack the weakest point which is always “security by obscurity”, in this case how Apple designed the system to protect the lock and door.

That said, your

”So, I take it that FBI want Apple to assist them to exploit an already present vulnerability. This implies that FBI can make this totally on their own, as can everyone else and iphones are already accessible for hacker, nation states, terrorist group and so on.”

is wrong:
- There is no existing vulnerability, it must be developed and deployed [5] by Apple.
- The FBI or any other attacker can’t develop that system without deep knowledge and access to Apple’s intellectual and physical property.

Rem:
It might be that the NSA / CIA / … have already stolen (sorry, I must use the word; collected would be a euphemism) this IP and the keys from an American business.
However, I guess it would be nearly impossible and too risky (even for the NSA) to try to utilize that stuff in this unimportant (dead) case.

Again: The system doesn’t exist yet, Apple would have to develop it.
The crux: Once developed you can’t delete the knowledge without killing the developers.

In other paragraphs of that posting you contradict yourself (e.g. ”… running on a ordinary computer, to brute force a weak passphrase. Extracting the encrypted data (in encrypted form) …” ) or stumble over your own feet with your lengthy reasoning (reminds me of @Skeptical).


Re your posting from today 11:21 AM
As I understood it, the 80 ms is not an artificial / designed delay but (Apple's?) accepted upper time limit to generate and check the AES256 key from the (probably strong) user key. There is no "stretch" at the first attempt.

(For the rest of your post I can’t comment, too many loops.)

—> Sorry for my long posting, I really try to avoid that.

[1] A user choice, not a backdoor.
[2] It's part of the security system, a new key would render the content useless.
[3] This might be called a backdoor, but it wasn’t there before.
[4] They say “just for this particular door” which is technically questionable.
[5] The “update without user consent” is a general weakness of the front door, but it is only exploitable together with a weak user pwd.
A strong user pwd would have completely avoided the whole discussion.

Sancho_PMarch 4, 2016 5:42 PM


@Dirk Praet

There is only a small (?) issue with Tails: it doesn't properly run from disk/stick on some modern desktops without doing pixie magic. While other distributions (but not all) readily run from DVD, Tails 2.0/2.0.1/2.2rc1 looks like washed Easter eggs on certain systems / graphics cards / displays (16:9).
Of course there's no hint what to do with an "Unknown Display".
Until this is solved, some will continue to call it Fails.

Btw. both the 2.0.1 and 2.2rc1 ISO downloads use an http connection.
Mentioning MITM attacks, OpenPGP, and "could be a fake" is scary; at least 80% of the folks I know aren't ready to work through the linked trusting page.
I know it's not easy, and it bugs me that I don't have a solution.

Sancho_PMarch 4, 2016 5:49 PM


@Clive Robinson, re Apple customer service overruled security engineers

Let me offer a very personal view:
The additional security was designed with the HW and most of the SW already in place, for existing phones; call it a hasty, impulsive fix.
It was demanded by His Highness, without considering users' or customer service's pain.
This can be seen in that there was no intended cure in case of a lost pwd, plus the user option to kill everything in case of a bad actor.

TC was already angry at that time.
Now he’s really angry.
Guess why?

Dirk PraetMarch 5, 2016 7:41 AM

@ Sancho_P

There is only a small (?) issue with Tails, it doesn’t properly run from disk/stick on some modern desktops without doing pixie magic.

There are quite a few known issues with TAILS. Check here.

GurkaMarch 5, 2016 11:51 AM

@Sancho_P: Your analogy is strange. Encryption cannot be compared with front doors, for the very reason that a front door can be replaced. But a replaced algorithm, a replaced input interface, or whatever, cannot decrypt.

So, apart from cryptanalysis and brute force, there is simply no way to "unlock" encryption if the encryption was done correctly. So the weakness must have been in place beforehand.

So, yes. There was a very distinct vulnerability beforehand. Apple uses some security by obscurity, or whatever one wants to call it, and now it is in the open. Obtaining Apple's knowledge and Apple's intellectual and physical property is therefore a problem not related to encryption theory.

It's actually self-evident. If Apple can, in theory, unlock a phone, this implies that the access is not protected by mathematics but by something else. The enemy knows the system, and obtaining this knowledge is a matter of reverse engineering, debugging, hacking, coercion, and so on.

So, good on you, Apple. The debate is long over; there already is a backdoor in place. This time the FBI wants it, and is lazy enough not to implement it themselves in silence. Next time, another agency or another nation state will exploit this in silence, independent of the outcome of this case. And before long, the exploit will be in criminal hands.

Why are we debating the implications of this case when there simply is one? The problem is serious enough, and security folks must prevail in hindering agencies and nation states from enforcing backdoors. But we also need to expose companies that use halfwit security that can be exploited.

Again, there is simply no way unlocking could be done after the fact, apart from brute force, cryptanalysis, or malware on a device still in the user's hands. Therefore, the weakness was already there. You cannot "unlock" my hard drive by writing a special version of LUKS as long as I don't provide the passphrase _after_ the "upgrade".

Dirk PraetMarch 5, 2016 1:16 PM

@ Gurka

You cannot "unlock" my hard drive by writing a special version of LUKS as long as I don't provide the passphrase _after_ the "upgrade".

OK, let's try this again. One can easily unlock your LUKS-protected hard drive if your passphrase is "1234" and the attacker can execute an automated brute-force utility that doesn't wipe your drive after 10 failed attempts.

Which is exactly why Apple put these additional security mechanisms in place, including that the passphrase for your drive is not limited to a generally weak password you entered yourself, but is a combination of that passphrase and another secret key stored in the phone. What the FBI wants is for Apple to write a custom version of iOS that lifts those additional security mechanisms and can be loaded onto the phone, which can then be hooked up to a GPU cluster or a series of Cray machines to eventually brute-force the passphrase combination.

If the owner of the locked phone, however, has used a sufficiently long and complex password, then even with the crippled OS, chances are it may still take them years, if not decades, to crack.

In conclusion: there is no weakness or hole in the Apple encryption scheme itself. It's actually stronger than LUKS or anything else in that your own passphrase is complemented by an additional secret to make up a combined key.
The only weakness here is in allowing a modified OS to be loaded on the phone when it's locked. Get the idea?
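That combined-key construction can be sketched roughly as follows; the UID value and iteration count are invented for illustration (the real derivation runs inside Apple's hardware and differs in detail):

```python
import hashlib

# Assumption for the sketch: a per-device secret fused into hardware,
# never readable by software.
DEVICE_UID = b"\x13" * 32

def combined_key(passcode, uid=DEVICE_UID):
    # The weak passcode is entangled with the device secret, so guesses
    # must run on the device itself (or the UID must first be extracted).
    return hashlib.pbkdf2_hmac("sha256", passcode.encode(), uid, 100_000)

real = combined_key("1234")
# Same passcode, wrong device secret: a different, useless key.
wrong = combined_key("1234", uid=b"\x00" * 32)
print(real != wrong)  # True
```

This is why the brute force has to run on the phone itself, and why the OS-enforced retry delays and wipe-after-ten-failures matter at all.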

Sancho_PMarch 5, 2016 6:13 PM

@Dirk Praet

Thanks, but it's not the black screen issue (like switchable graphics in BIOS).
Booting from DVD is OK; it's just that the HDMI display isn't recognized and the resolution is set to 1024 x 768 (4:3) and can't be changed, although the display is HD (16:9).
Other Linuxes (e.g. Solus) happily report and set the correct display on DVD start.

Sancho_PMarch 5, 2016 6:21 PM

@Dirk Praet

”The only weakness here is in allowing a modified OS to be loaded on the phone when it's locked.”

Um, yet the fact that it's "locked" only matters in this particular case, because Farook, the user, is dead. I wouldn't stress that fact.

Imagine the special SW was built by Apple:

The FBI (court) would demand a targeted update for "12341234" (phone number, IMEI); the next time the user unlocks the device - bang.

Or the IRS would demand a targeted update for Mr. Donald Pony,
to copy all his data directly to them.
Tomorrow Donald would unlock the phone, hurray.

The update could be a global security update; you and I would download it as it is recommended (or mandatory) for all users.
But it would include a special part to load some add-ons only onto "criminals" or the like (e.g. Donald), selected by the identifier of that phone.

Next week, with your very consent, you’d download the demanded backdoor for your gov, the universal key for LEOs, or one for each “Security Organisation”, to activate their “lawful” access.

@Skeptical's wet dream.
Good night.

Dirk PraetMarch 6, 2016 11:11 AM

@ Sancho_P

Next week, with your very consent, you’d download the demanded backdoor for your gov, the universal key for LEOs, or one for each “Security Organisation”, to activate their “lawful” access.

That would indeed be a "game over" scenario, and exactly what Apple is trying to avoid. But even if the USG eventually loses, the question remains what they will do about other nations like China and the UK, under whose legislation such requests would be entirely lawful (at least if Theresa May manages to get the new Snooper's Charter through). France and other countries are heading in the same direction. Either they'll have to come up with a device that makes any such requests impossible by design, or there are going to be different versions of iOS for different countries, lest they face serious fines, country-manager jail time, or even sales prohibitions.

ianfMarch 6, 2016 12:42 PM


@ Dirk Praet “Either Apple will have to come up with a device that makes any such state-ordered decryption requests impossible by design, or there's going to be different versions of iOS for different countries…”

They may adopt a two-tier approach: the default iOS is protected-but-decryptable-upon-court-order, BUT the end user is then given a choice to harden it additionally, in such a fashion that all Apple ever sees are encrypted same-size salted/padded snippets; the owner is prevented from reusing (even partial/similar) passwords for its various functions; and no passwords can EVER be recovered by Apple. Moreover, iOS could have an option specifying that CRITICAL functions (like bank payments, say, iMessaging, or access to iCloud data) be executable solely from a predefined narrow geo-fence and/or IP range.

The litmus test for that will be when they start encrypting incremental iCloud backups. In the end the gluttonous FBI etc. may end up with far less than they have now…

Dirk PraetMarch 6, 2016 2:07 PM

@ ianf

They may adopt a two-tier approach...

Sounds entirely plausible to me. After all, isn't Microsoft doing the same with Windows 10 now? Pro and Enterprise users have the option to disable automatic updates and turn off most of the data the operating system wants to send back to Redmond. Home users, on the other hand, are forced to download automatic updates, while the OS also controls how much bandwidth a user consumes, displays ads in the Start Menu, logs every key press, downloads a user's browser history, and so on and so forth.

BoppingAroundMarch 6, 2016 4:11 PM

Dirk Praet,
Did they give the postpone functionality to the Pro edition recently? I thought Enterprise was the only one that's remotely usable.

Off-topic: I'm experimenting with Win10 in a VM. Having configured the built-in firewall not to allow anything out except a small list of programs, the OS has so far been awfully compliant, apart from regularly firing off DNS queries for various *.microsoft.com domains, though I admit I haven't tested it properly (running Wireshark in the same VM isn't proper testing). It also continues to run Cortana in the background (according to Task Manager) no matter that Group Policy says Cortana is disabled.

Dirk PraetMarch 6, 2016 4:44 PM

@ BoppingAround

Off-topic, I'm experimenting with Win10 in a VM.

I recommend using one of the many free utilities to stop Windows 10 spying. Check here for a comparison.

It does also continue to run Cortana in the background (according to the Task Manager) no matter what the Group Policy says that Cortana is disabled.

You can permanently disable Cortana as explained here.

Sancho_PMarch 6, 2016 5:47 PM

@Dirk Praet, ianf

I guess a “different (i)OS for different countries” or “end user choice [1] ” or “elite and plebs version” wouldn’t be viable business models when we think of tomorrow [2].

Every nation, country, or (e.g. US) state will have different legislation.
And they will change that law over time, forth and back (the latter is unlikely, though).
E.g. a mobile device may be lawfully sold in New Jersey but not in NY; see:
https://nakedsecurity.sophos.com/2016/01/15/new-york-tries-to-force-phone-makers-to-put-in-encryption-backdoor/

Despite that crazy proposal (which wouldn't help if the device was bought in New Jersey but used by the pimp in NY), there's a solution in sight only when we turn it around:

The device will obey the law. All lawful devices would have the same OS.
Other devices would be considered contraband.

The device would store the actual PIN and location at the manufacturer (similar to your SIM card).
In case there is a pending warrant + the device is within the applicable area + the “lawful service” fee is paid, the manufacturer will hand over whatever the warrant asks for (PIN in the clear, data if applicable).

In the unlikely case your country doesn’t have such legislation nothing will harm you.

Otherwise:
- Extra business for the manufacturer, money comes from the tax payer.
- US could access all spy phones used in the US.
- [cough], China could access all US “embassy” phones …
- Sure there is a “don’t touch” flag in the database for the elites (Congress, …).

There are other consequences of such devices I don’t want to think of at the moment (leaving now for a barrel of red wine).


[1] Funny idea in the light of National+Cyber+Security.
[2] And I wouldn’t reduce the scope to mobile devices only.

Dirk PraetMarch 6, 2016 7:58 PM

@ Sancho_P

The device would store the actual PIN and location at the manufacturer (similar to your SIM card).

Are you sure you wrote this BEFORE that barrel of red wine, or was it rather AFTER?

K-VeikkoMarch 10, 2016 2:06 AM

The solution is changing the keys on a regular basis, corrupting the old keys by publishing them. What a business: if you don't regularly buy the updated keys, all of your information is wide open to the world.

If the constitution and open justice were followed, Apple's response should be made public in detail. If they give keys to the state, those same keys should be published in the court protocol.

The state cannot demand information from a citizen that couldn't be demanded by any other citizen.

GurkaMarch 16, 2016 5:36 PM

@Dirk Praet: Yes, I perfectly understand the technological description, but yes, there is a weakness. Exactly where I said: instead of a proper key-strengthening algorithm, Apple used dummy delays, busy waits. This is exactly the kind of thing that enables brute force on a much stronger machine. If a, say, one-second computational delay had to scale to a, say, million-times-faster computer, there would still only be a million tries per second. But if the algorithm uses meaningless delays, the brute force scales much better.

So, this is not the way you do it; ergo, Apple failed. And no, you don't gain any strength with "second secrets" or such (it's not even two- or multi-factor authentication).

The whole point is that the scaling should be roughly linear. If it isn't, the method is bogus.
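Putting illustrative numbers on that scaling argument (none of these are measured figures, just round assumptions):

```python
# Back-of-envelope with assumed round numbers.
keyspace = 10_000   # a 4-digit PIN
speedup = 1_000     # attacker's rig vs. the phone

# Computational stretch: 80 ms of real work per guess on the phone
# scales linearly on faster hardware.
stretched_s = keyspace * (80 / speedup) / 1000
print(f"stretched: {stretched_s:.1f} s")    # stretched: 0.8 s

# A timer-based delay is bypassed entirely offline, leaving only the
# raw key derivation, assumed here to be 1 microsecond per guess.
timer_only_s = keyspace * 1e-6
print(f"timer only: {timer_only_s:.3f} s")  # timer only: 0.010 s
```

Either way, a four-digit PIN falls almost instantly; the stretch only buys meaningful time against a large keyspace, so for short PINs the on-device retry limits carry all the weight.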

Yes, "erase device after N failed attempts" is good security, but it was never designed for physical access. If my screensaver lock were the "real" security, it would fail in the same way. Therefore, it's implemented wrong. The screen lock should only prevent login, and the encryption should protect the data (after a reboot).

As I said before, I don't know what Apple designed or how this encryption relates to daily use, but I suppose a device should ask for a real passphrase at boot and use a simple screen lock ("1234" is then fine) as a "screensaver".

Obviously, a "1234"-style passphrase will be brute-forced in milliseconds, but that is not the point. That is just user error.

I'm not the only one to point out that iPhone security has these kinds of flaws.



Schneier on Security is a personal website. Opinions expressed are not necessarily those of IBM Resilient.