Possible Government Demand for WhatsApp Backdoor

The New York Times is reporting that WhatsApp, and its parent company Facebook, may be headed to court over encrypted chat data that the FBI can’t decrypt.

This case is fundamentally different from the Apple iPhone case. In that case, the FBI is demanding that Apple create a hacking tool to exploit an already existing vulnerability in the iPhone 5c, because they want to get at stored data on a phone that they have in their possession. In the WhatsApp case, chat data is end-to-end encrypted, and there is nothing the company can do to assist the FBI in reading already encrypted messages. This case would be about forcing WhatsApp to make an engineering change in the security of its software to create a new vulnerability—one that they would be forced to push onto the user’s device to allow the FBI to eavesdrop on future communications. This is a much further reach for the FBI, but potentially a reasonable additional step if they win the Apple case.

And once the US demands this, other countries will demand it as well. Note that the government of Brazil has arrested a Facebook employee because WhatsApp is secure.

We live in scary times when our governments want us to reduce our own security.

EDITED TO ADD (3/15): More commentary.

Posted on March 15, 2016 at 6:17 AM • 231 Comments

Comments

keiner March 15, 2016 7:52 AM

The most scary thing is that the author recognizes the nature of the surveillance state so late…

Thoth March 15, 2016 7:56 AM

@all
Simply stop using WhatsApp or any closed-source variant and switch to an open-source alternative like Signal, where you have more personal control over the code you are running.

vwm March 15, 2016 8:02 AM

@Rolf: No one doubts that it’s technically possible for WhatsApp et al. to cripple, disable or downgrade their security features. Most of us think that it would be a rather stupid move, as it puts the security of regular citizens in jeopardy. Most criminals will simply employ some other end-to-end encryption tool (and double-check the keys), thwarting your monkey-in-the-middle. — Regards, Vincent

Mike Gerwitz March 15, 2016 8:06 AM

I’m under the assumption that users connect to WhatsApp servers—or maybe even proxy through them—to provide a more pleasant experience for users. Please correct me if I’m wrong.

In that case, government demands like these are only possible because WhatsApp is in control of those connections; if users switch to distributed services where they can choose their “provider”, or even connect to one-another directly, then such requests for a wire-tap aren’t possible—the government would have to contact individual providers or users with a warrant/subpoena, which is far less of a reach.

Of course, if the Apple case is won, that’s a precedent for the government to force developers to insert backdoors into their software, which would make this attack possible again. Users should use free software so that anyone can audit the code and review patches. If there is question about the integrity of the project, it can be forked and development continued under a different entity.

With all that said, we know that various government agencies are engaged in the active exploitation of users’ computers; that won’t stop them from trying to install a keylogger. But let’s not use that as an excuse.

Rolf Weber March 15, 2016 8:21 AM

@vwm

“No one doubts”? Schneier said WhatsApp could only comply by creating new vulnerabilities and patching the client. But this is simply not true. The man-in-the-middle vulnerability already exists, and the client doesn’t need to be changed (the WhatsApp client currently doesn’t alert on certificate changes; it’s not even configurable).

Of course you are right that criminals could use other messengers. But that’s the intent. They simply should not be able to hide in the big mass.
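[The check being argued over here is small. A minimal sketch, with hypothetical keys and fingerprints rather than WhatsApp’s actual protocol: a client that remembers a peer’s key fingerprint can flag a substitution, while a client that skips the comparison silently accepts a man in the middle.]

```python
import hashlib

def fingerprint(public_key_bytes: bytes) -> str:
    """Short hex fingerprint of a peer's public key."""
    return hashlib.sha256(public_key_bytes).hexdigest()[:16]

# Key the client saw when the conversation began (hypothetical value).
pinned = fingerprint(b"alice-original-public-key")

def check_peer_key(current_key_bytes: bytes) -> bool:
    """Return True if the peer's key is unchanged. A client that never
    performs this comparison will accept a substituted key without any
    warning to the user."""
    return fingerprint(current_key_bytes) == pinned

assert check_peer_key(b"alice-original-public-key")        # same key: accepted
assert not check_peer_key(b"mitm-substituted-public-key")  # swapped key: flagged
```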

Philip Collier March 15, 2016 8:25 AM

Compliance with such demands may be effective against persons acting on impulse, or maybe amateurs. Hardened criminals and terrorists will think ahead and find strong third-party cryptography.

Too bad because compliance will hurt only the innocent and also destroy trust in American providers of secure communications.

Why go through the trouble? Isn’t terrorism more rare per capita than lightning strikes?

Lisa March 15, 2016 8:25 AM

@Rolf: any system that allows man-in-the-middle attacks has a failure in authentication.

For multi-party systems, security typically depends more on authentication than it does on secrecy (encryption), so what you are proposing is nothing more than snake oil or Clipper chips.

At the end of the day, as a society we have to decide whether we want trusted security products, or whether we will kill off the tech industry through the mistrust that every computer, tablet, phone, TV, IoT device, wearable device, etc. can be leveraged to spy on us, all in the hope of making it easier for police states to capture a very small number of additional criminals.

Mmm March 15, 2016 8:25 AM

From what I’ve read it’d be possible for WhatsApp to disclose the information.

In interviews with journalists, WhatsApp stated that they would use public-key encryption, where only the sender and recipient can decrypt content. Indeed they did, but they used the same key for every user.

While this is surprising given WhatsApp’s previous PR, it does explain the mysterious $19 billion price tag (ultimately $21.8 billion) that Facebook was willing to put on WhatsApp. In my opinion, fully encrypting all WhatsApp content would make WhatsApp a near worthless asset to Facebook, especially considering the abandonment of the $0.99-a-year subscription model.

https://technical.ly/brooklyn/2015/11/20/whatsapp-not-really-encrypting-messages/

Apparently WhatsApp are now encrypting on iOS and not just Android but, for the reasons given above, I do not trust it.

WhatsApp has added Axolotl encryption in WhatsApp for iOS. You can see in the server properties that it is already enabled; WhatsApp is also planning to encrypt group conversations and images…

https://github.com/mgp25/Chat-API/wiki/WhatsApp-incoming-updates

Sasparilla March 15, 2016 8:43 AM

@Wm – “Skype Co-Founder’s Wire app gets end-to-end encryption”

I’d view anyone’s work that had former executive connections to Skype as extremely suspect, no matter what country they set up in. Remember, Skype execs had chosen to give the NSA backdoors to its customers’ communications well prior to being sold to Microsoft.

As to the complaint about Signal accessing your contact list, that’s so it’s easy to add the person you want to communicate with; the list isn’t relayed out of the phone or directly imported into Signal (you only pick contacts to add to Signal). The endorsements of Signal by well-respected folks within the encryption and privacy communities (including, I believe, Bruce here) also carry a lot of weight. I’m sure the govt and its minions hate Signal, and I would not be surprised to find out that they have people online actively trying to steer people away from it.

I’d trust the open-source Signal app far more than former Skype execs’ new creations. Fool me once (Skype gives access to NSA), shame on you; fool me twice, shame on me. JMHO…

gerald March 15, 2016 8:46 AM

@Thoth – eventually it will be illegal to make, use or sell any software that performs as you say. Like guns: you can’t own an automatic weapon, and it’s still illegal if you make it yourself.

Stephan16498 March 15, 2016 8:46 AM

WhatsApp, end-to-end encrypted? Who says so? Only Moxie Marlinspike claimed so, and with that single post on his blog as the sole source of information, everybody on the planet repeated it without questioning. There is yet to be found a single public statement where WhatsApp themselves confirm they provide encryption. They have never confirmed nor denied it (sounds familiar? 😉)

Even funnier is when WhatsApp introduced WhatsApp Web (where you can find your conversations in a desktop browser). End-to-end encrypted mobile-to-mobile, and yet there’s a side track to show messages in a browser? Funny that nobody challenged the “end-to-end” assumption. How can the webserver-in-the-middle display a copy of the messages? Even if the browser were to perform decryption via JavaScript (a la Protonmail), who manages the private keys?

Third point: remember WhatsApp was acquired by Facebook. Do you think Facebook would allow end-to-end encryption, lose visibility into the messages, and forfeit the gold mine of personal data that they paid $19 billion for?

I completely challenge the unsubstantiated assumption that WhatsApp is end-to-end encrypted. Moxie Marlinspike gained publicity, and WhatsApp gained a myth. But this is one big honeypot that everybody who cares about their privacy should avoid. The FBI/NSA have probably secured a backdoor already through a National Security Letter with a gag order, so you’ll never know. But the fact that WhatsApp never confirmed the encryption is just one big, dead canary that speaks for itself.

Check the EFF Secure Messaging Scorecard at https://www.eff.org/secure-messaging-scorecard – WhatsApp is one of the worst rated applications. I have utmost, absolute respect for Bruce Schneier (man you’re a hero for the planet) but this is one time you should check your sources.

Rolf Weber March 15, 2016 8:53 AM

@Lisa

No, I “demand” nothing new. You may argue WhatsApp lacks proper authentication, but this “vulnerability” already exists!
The government just wants to exploit this EXISTING VULNERABILITY.

WhatsCrap March 15, 2016 9:07 AM

EFF thinks WhatsApp is simply crap. What’s that you say? Well:

https://www.eff.org/who-has-your-back-government-data-requests-2015#whatsapp-report

WhatsApp earns one star in this year’s Who Has Your Back report.

Industry-Accepted Best Practices. WhatsApp does not publicly require a warrant before giving content to law enforcement. WhatsApp does not publish a transparency report or a law enforcement guide.

Inform users about government data demands. WhatsApp does not promise to provide advance notice to users about government data demands.

Disclose data retention policies. WhatsApp does not publish information about its data retention policies, including retention of IP addresses and deleted content.

Pro-user public policy: oppose backdoors. [Yah! One star baby]

But wait, there’s more:

https://www.eff.org/secure-messaging-scorecard

WhatsCrap ScoreCard

Encrypted so the provider can’t read it? – NO
Can you verify contacts’ identities? – NO
Are past comms secure if your keys are stolen? – NO
Is the code open to independent review – NO
Is security design properly implemented – NO
Encrypted in transit – YES
Recent code audit – YES

So, a whopping 2/7 stars for WhatsCrap. What else would you expect from a Fraudbook company?

Get real and try instead (7/7 stars on EFF scorecard):

ChatSecure + Orbot
OTR (Pidgin)
Signal/Redphone
Silent Phone
Silent Text
Telegram (secret chats)

Thoth March 15, 2016 9:15 AM

@gerald
That is only if they can enforce a total communication blackout and lockdown. It would be tedious if they had to search every electronic message for cryptographic steganography, every paper mail for One-Time-Pad-encrypted contents, every toy being X-rayed at every train stop… how much effort and resources can they devote to a complete lockdown state before it wears out on its own?

Side channels in different messaging channels can be used to bypass surveillance like altering clock timing, frequency of messages and so on. It’s part of the prisoner’s dilemma class of game theory.

I am not very familiar with Cuba, but it is said that under such a lockdown state, the people themselves have found their own forms of trust relationships and networks. They have their own internal WiFi networks and their own ways of passing messages and learning news (besides state propaganda media).

All it takes is for us to be willing to give up our own personal security and privacy and this will enhance the surveillance and lockdown state effect further.

For those who really want to create weapons, anything from a broken glass shard to chopsticks can be dangerous. How are they going to ensure that the lathe and drills in your garage “prevent making dangerous weapons”? Maybe the drills themselves could be repurposed as weapons too 😀 .

We know that media is going to be locked down. The next step of evolution in messaging is the Box-in-a-Box approach (encrypt a message using a separate utility tool and put it over a normal messaging channel) with P2P or mesh networks over multicast/broadcast messaging systems that are resilient to being taken down via single points of failure. They should also be portable enough to be loaded into embedded systems.

Vesselin Bontchev March 15, 2016 9:32 AM

@Wm Wire is an intrusive, badly designed piece of crap. See my initial thoughts on it:

http://pastebin.com/XHtasYbb

Although they have released the source of some parts (like the crypto), the whole project isn’t open source (at least not yet), so you don’t really know what is happening inside. The fact that it relies on a centralized server with no ability to detect MitM attacks (unlike Signal) is what bothers me the most.

For a secure P2P open source messenger (supporting voice and video), take a look at qTox. Still far from a polished product, but I liked it better than this Wire crap.

Dr. I. Needtob Athe March 15, 2016 9:58 AM

It’s good to see corporations protecting the people from the excesses of government, but I’d much rather see government protecting the people from the excesses of corporations.

65535 March 15, 2016 10:01 AM

It appears that governments [including the US Government] have grown too comfortable with spying on their citizens’ electronic conversations with ease and are now having “Spying Withdrawal” symptoms like a heroin addict.

If governments cannot stand the thought of their citizens communicating in private there is something wrong with the “Government.”

From the 1770’s through the Civil War and through most of the 1800’s and part of the 1900’s, the “Government” got along well even though it could not electronically spy on its citizens. There was law enforcement, which functioned without such spying.

Maybe it is time to realize the problem may not be with secure electronic communication and privacy; the problem may be the “Government” as we know it in the 2010’s. It may be time to shake up the “Government” and leave the citizens’ electronic communication alone.

Law enforcement may have to revert to actually doing real investigations and dull, slogging foot work to do their job(s). In short, the “Government” should stay out of citizens’ private conversations!

Lastly, we don’t need any more multi-decade wars such as the never ending “War on Terror” and the like.

Clive Robinson March 15, 2016 10:27 AM

@ Rolf Weber,

… criminals could use other messengers. But that’s intended. They simply should not be able to hide in the big mass.

Sometimes I think people really do not get it. Politicos and plods, OK, they are not comms or security people, but the regular posters here I would expect to be either more knowledgeable or insightful. The point is that I can encrypt my messages myself, with my own tools, before they ever touch the app.

Which means I could easily use any messenger app of my choice, no matter how backdoored by the FBI, CIA, NSA, MI5, MI6, GCHQ, et al, and they would not get the real plaintext of my message.

Which further means that, contrary to what you say, they could quite happily “hide in the big mass” with little difficulty as well as having message content security.
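[The layering described here, and Thoth’s “Box-in-a-Box” idea above, can be sketched in a few lines. This toy uses a one-time pad purely for illustration (real superencryption would use a vetted cipher and key management); the point is only that the messenger app ever sees ciphertext.]

```python
import secrets

def otp_encrypt(data: bytes, key: bytes) -> bytes:
    """XOR one-time pad. The key must be truly random, at least as long
    as the message, and never reused. Decryption is the same operation."""
    assert len(key) >= len(data)
    return bytes(d ^ k for d, k in zip(data, key))

message = b"meet at noon"
key = secrets.token_bytes(len(message))  # exchanged out of band

ciphertext = otp_encrypt(message, key)   # this is all the messenger carries
assert otp_encrypt(ciphertext, key) == message  # recipient recovers the text
```

Whatever backdoor the transport has, it only ever handles the outer ciphertext.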

Skeptical March 15, 2016 10:41 AM

This seems to be a sharp internal contradiction:

And once the US demands this, other countries will demand it as well. Note that the government of Brazil has arrested a Facebook employee because WhatsApp is secure.

Unless I’m misreading it, the second sentence flatly contradicts the first. Other countries already demand it.

And this highlights the larger point: Neither China nor Russia nor any other country is waiting upon the US to determine whether they will impose requirements on communication devices used inside their own countries.

The right to privacy is deeply important, but our political, legal, and social system is constructed in such a manner that we balance that right to privacy against other values.

The insistence that I see from some quarters that it’s all or nothing – that either something is secure against search warrants as well as thieves or it’s not secure at all – seems more driven by emotion than by a rational deduction from the state of technology.

If a thief needs to obtain both the compliance of a company and the assets and position of a nation-state to break into my flat, and will otherwise be unlikely to be able to do so before being detected and thwarted, then I’d say my flat is pretty secure. It’s not as secure as the castle on my private island in the Arctic Circle that I repair to for some quiet time away from it all – where I have a moat filled with healthy polar bears, multi factor authentication, and am always behind forty-two proxy doors – but it’s reasonably good enough.

I think it would be more constructive if companies focused upon reasonable security, and making reasonable promises –

e.g. “listen, we’ll protect you as best we can from criminals, and we won’t look at your data ourselves, and we’ll submit to independent audits to prove as much to you – in fact, we encourage the passage of laws mandating or incenting minimum levels of security, either via impositions of liability for breaches or via regulatory agency or both – but we’re also going to cooperate with the government when, in good faith, we believe that we are ethically or legally obliged to do so. If that’s not good enough for you, we understand, but the very fact that we’re willing to lose your business in order to behave responsibly should be an indication that we’re more trustworthy than most.”

That, frankly, is a reasonable and honest stance. What Apple is currently doing – what others are attempting to do – is not.

And I am far from the only one who thinks so.

gerald March 15, 2016 11:00 AM

@Thoth – good points, but you’re describing something else entirely. There are other systems, but the problem is they are difficult to implement. Somebody has to actually code it up, make it work. That’s the hard part, it’s not the idea of it.

@Mike Gerwitz – it’s a double-edged sword. With a central server you can call up the provider “X” and say “I lost my thingy” and they can deactivate it. An obvious alternative with no central anything is, say, an encrypted flash drive. But then you are at the mercy of the hardware design because your stuff is actually on the device. And if they break in you will never know it. Many here are referring to yet another scheme.

gerald March 15, 2016 11:15 AM

Adam Segal, author of “Hacked World Order,” wrote this column in the LA Times:

http://www.latimes.com/business/technology/la-fi-0315-the-download-encryption-20160315-story.html

It’s a familiar thesis, but the second proposal struck me:

“Second, the executive branch should explore developing a national capacity to decrypt data for law enforcement. The challenge of going dark affects state and local law enforcement the most: They are the least likely to have the resources and technical capabilities to decrypt data relevant to investigations. Creating a national decryption capability, housed within the FBI and drawing upon the expertise of the National Security Agency, would provide assistance to state and local law enforcement, similar to what the FBI provides for fingerprint and biometric data. ”

Is this already taking place? Can you spell ‘unlimited budget’?

Response March 15, 2016 11:16 AM

@Stephan16498

How do you explain this screenshot confirming (on Android) that end-to-end encryption is in use?

https://i.imgur.com/ZDRhmkN.jpg

Apart from Open Whisper Systems confirming that they provide the backbone technology for WhatsApp on Android (link below), there are several articles online from reputable sources (like Ars Technica), although I don’t know where they’ve sourced their stories from.

http://arstechnica.com/security/2014/11/whatsapp-brings-strong-end-to-end-crypto-to-the-masses/

http://arstechnica.com/tech-policy/2016/03/encrypted-whatsapp-voice-calls-frustrate-new-court-ordered-wiretap/

https://whispersystems.org/blog/whatsapp/

Rolf Weber March 15, 2016 11:56 AM

@Clive Robinson

You simply don’t get the point. If “backdoored”, criminals cannot simply take advantage of and solely rely on WhatsApp’s encryption. They need to add their own layer.

@Skeptical

Great comment! And no, you are not the only one.

Rolf Weber March 15, 2016 12:18 PM

What is most troubling is that techies again lie about technical facts. Like here Bruce Schneier. He claims that WhatsApp had to implement new vulnerabilities and to update the client. That is simply bullshit, and Bruce knows it. But he claims it anyway, because it fits into his ideology.

And none of the “experts” here objects to this obvious technical nonsense. The tech community is a sect.

albert March 15, 2016 12:24 PM

Remember,

‘Fighting terrorism’, while important, isn’t the only reason for mass surveillance(MS). As states become more draconian in their oppression, MS becomes a critical tool in population control(PC).

We’re experiencing PC now in the form of Mass Media Propaganda(MMP). It has been relatively effective so far, but weaknesses are developing in the system. Trump/Sanders is but one example. The Black Lives Matter(BLM) movement is another. Then there’s Snowden and Wikileaks.

The endgame comes when ‘terrorism’ is no longer required as a reason for State Surveillance(SS).

All hail the Fearless Leader of the Democratic Peoples Republic of America…(No, I don’t have a Dr.Strangelove arm reflex)…

Hilary R!!

. .. . .. — ….

gerald March 15, 2016 12:33 PM

@Mike Gerwitz – you probably saw this:

http://www.wired.com/2016/03/fbi-crypto-war-apps/

See especially Matthew Green’s comment:

“If you’re developing a messaging system that relies on a centralized, trusted key server, now is the time to rethink that design.”

But the reason I post the link is because app developers need to understand what has to happen going forward. You can’t just code up a UI and then plug in some recognized encryption algorithm the way umpteen others have done. The threat in the Wired article is that authorities could try to force you to rewrite it to suit them.

That means even open-sourced code will be of limited value. Who is going to examine every line of code to see for themselves everything is OK? Will you recognize a problem when you see it? It’s not as simple as scanning for hard-coded keys/passwords. And even if the code is signed and you match the signature, you have to build it yourself on the appropriate platform, only to examine the code again every time there’s an update.

Martin March 15, 2016 12:50 PM

What the hell is wrong with folks thinking it is OK to have private conversations. It is so plainly obvious the government has a right, no an obligation, to listen in on everything we say to everybody. The government is made up of really special people, and hence, deserve really special privileges. Please stop making the job of our government employees so difficult and let them listen to and see everything that we’re doing. OK?

z March 15, 2016 12:57 PM

Am I the only one who is still completely stunned that the government would pursue backdoors so publicly again? I always expected that they would attempt it surreptitiously, as the NSA has done, but I never thought they would be so brazen as to just demand it openly and draft laws that require it.

I think a lot of us thought the crypto wars of the 90’s were so utterly embarrassing for the government, with the Clipper chip fiasco, export grade crypto failing miserably, etc., that there’s no way they would ever be dumb enough to try this BS again.

No, no, no, no March 15, 2016 12:58 PM

@Rolf Weber

You couldn’t be more wrong when you say:

“What is most troubling is that techies again lie about technical facts. Like here Bruce Schneier. He claims that WhatsApp had to implement new vulnerabilities and to update the client. That is simply bullshit, and Bruce knows it. But he claims it anyway, because it fits into his ideology.”

What Bruce actually said was:

“This case would be about forcing WhatsApp to make an engineering change in the security of its software to create a new vulnerability — one that they would be forced to push onto the user’s device to allow the FBI to eavesdrop on future communications.”

If we accept that WhatsApp are encrypting messages (and several sources report that this is happening on the Android platform) then Bruce is absolutely correct: what is being asked for here is that WhatsApp undermine their end-to-end encryption in order to help the government.

I refer you to this screenshot which confirms end-to-end encryption is in in place:

https://i.imgur.com/ZDRhmkN.jpg

Instead of calling out well-respected members of the technology/cryptography community by accusing them of peddling “bullshit”, you should get your own facts straight first and then back them up with evidence.

Mike Gerwitz March 15, 2016 1:15 PM

@gerald I hadn’t seen that link; thank you. I have a huge reading backlog from all these events.

That means even open-sourced code will be of limited value. Who is going to examine every line of code to see for themselves everything is OK? Will you recognize a problem when you see it? It’s not as simple as scanning for hard-coded keys/passwords.

Free software philosophy aside: you still have a better chance of finding security issues (which can be more dangerous than blatant backdoors) and backdoors if you can examine the source code than if you download proprietary binaries. I should note that reproducible builds are also a necessity here.
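[The reproducible-builds point reduces to a byte-for-byte comparison: if the audited source deterministically produces the distributed binary, anyone can check the hash instead of auditing every release. A minimal sketch, with placeholder bytes standing in for real build artifacts:]

```python
import hashlib

def sha256_digest(data: bytes) -> str:
    """Hex digest identifying a build artifact."""
    return hashlib.sha256(data).hexdigest()

# With reproducible builds, compiling the audited source yourself yields
# a byte-identical artifact to the one the project distributes.
local_build = b"\x7fELF...binary built from audited source"   # placeholder
distributed = b"\x7fELF...binary built from audited source"   # placeholder

assert sha256_digest(local_build) == sha256_digest(distributed)
```

If the digests differ, either the build is not reproducible or the distributed binary is not what the source says it is.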

And even if the code is signed and you match the signature, you have to build it yourself on the appropriate platform, only to examine the code again every time there’s an update.

Ultimately you’ll have to place your trust in an organization that distributes the software—most users don’t know how to audit the code, and others that do (like myself) don’t have the time to do that; it’s a huge task. The Tor project is a good example of this; with their wide user/hacker base, and with the freedom in which researchers are able to research and exploit the protocol, we are much better off than we are in a situation like we have with WhatsApp (from a software perspective).

You’re never going to have guaranteed security, especially in networked systems. But you should do whatever you can to mitigate threats. If users don’t find the situation with WhatsApp to be a threat, I may disagree, but that’s for them to decide.

Daniel March 15, 2016 1:35 PM

Skeptical writes, “The right to privacy is deeply important, but our political, legal, and social system is constructed in such a manner that we balance that right to privacy against other values.” .

I want to call out this statement because it is one of those beautiful examples of the dark art of political framing. It is a statement that is true but misleading.

What Skeptical is arguing is what I have come to call the “rights balancing” argument that is inherent in the 4A. And it’s true. The 4A does balance rights. Crucially, however, not everything in our “political, legal, and social system” is based upon rights balancing. For example, the third amendment to the US Constitutions is an example of an absolute prohibition on executive activity.

So before we can answer the question: how do we balance rights in the Apple or Facebook cases, the government needs to justify why there should be any balancing test at all. In other words, it is not self-evident that “rights balancing” is the proper frame of reference for resolving debates about encryption. There is precedent for resolving such questions on absolutist grounds.

Rolf Weber March 15, 2016 2:03 PM

@No, no, no, no

Maybe you don’t know, but Bruce Schneier knows for sure that end-to-end encryption can be easily broken if an eavesdropper sits in the middle and nobody checks authentication. Both is given regarding WhatsApp.

No, no, no, no March 15, 2016 2:24 PM

@Rolf Weber

For an eavesdropper to sit in the middle they’d need to break the SSL encryption first and then the end-to-end encryption. Depending upon how the E2E encryption is implemented, e.g. with PFS, there may be difficulties decrypting historical messages.

WhatsApp say:

“WhatsApp communication between your phone and our server is encrypted.”
https://www.whatsapp.com/faq/en/general/21864047

We’ve already seen that on Android E2E encryption is definitely being used, and on iOS it’s being rolled out.

Therefore the only people who are realistically in a position to conduct such an attack are nation states. As Bruce implies it may be too late to help decrypt historical communications (or possibly not depending upon their implementation) but what the FBI are pushing for is the ability to access future communications.

This case would be about forcing WhatsApp to make an engineering change in the security of its software to create a new vulnerability — one that they would be forced to push onto the user’s device to allow the FBI to eavesdrop on future communications.

I fail to see how you can claim that Bruce is talking “bullshit” when it seems that the only way of getting access to future communications is to implement new vulnerabilities and update the client.

He claims that WhatsApp had to implement new vulnerabilities and to update the client. That is simply bullshit, and Bruce knows it. But he claims it anyway, because it fits into his ideology.

Whether people check authentication or not is not relevant to your argument because we’re talking about accessing future communications.

Assuming the parties do check authentication then the FBI would still need to bypass the SSL and then potentially interfere with WhatsApp’s PKI before the messages could be decrypted. To do that probably requires introducing new vulnerabilities which is what Bruce is saying.

Rolf Weber March 15, 2016 2:31 PM

@No

You seem to lack an understanding of the technical basics. Only future communications can be decrypted, because a man-in-the-middle attack can only be performed on ongoing communications, not stored ones. Please do some homework before you continue this discussion.

OldFish March 15, 2016 2:36 PM

Reasonable security: effective regardless of the opponent’s identity, motives, legal basis, or budget.

BJP March 15, 2016 2:51 PM

@No, no, no, no

We’ve already seen that on Android E2E encryption is definitely being used

“Definitely”. Based on… the screenshot up above that claims so?

Further, your apparent belief that a man-in-the-middle attack requires breaking encryption when clients do not implement certificate pinning shows a need to re-check your prior assumptions.
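[The pinning point can be made concrete. A minimal sketch, with hypothetical certificate bytes rather than WhatsApp’s real trust setup: with a pin, an interposed server is rejected even if its certificate is CA-signed; without one, TLS accepts any CA-signed certificate, so no encryption has to be “broken” to sit in the middle.]

```python
import hashlib

# Fingerprint of the one certificate the client will accept,
# shipped inside the app (hypothetical value).
PINNED_CERT_SHA256 = hashlib.sha256(b"server-certificate-der-bytes").hexdigest()

def connection_allowed(presented_cert_der: bytes) -> bool:
    """With pinning, any certificate other than the expected one is
    rejected, regardless of which CA vouches for it. A client without
    this check trusts whatever the CA system presents."""
    return hashlib.sha256(presented_cert_der).hexdigest() == PINNED_CERT_SHA256

assert connection_allowed(b"server-certificate-der-bytes")       # real server
assert not connection_allowed(b"attacker-certificate-der-bytes") # interposer
```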

No, no, no, no March 15, 2016 3:12 PM

@Rolf Weber

I know that a MITM would only benefit future communications. That’s why I highlighted in bold “future communications” in my post.

The reason I put “or possibly not depending upon their implementation” [accessing historical communications] is because it seems that WhatsApp use the same key for all of their communications. If that’s true then they’re not using PFS; if it’s false then they wouldn’t be able to easily access historical communications (unless it was badly implemented) but may, subject to what Bruce is saying, be able to access future communications.

I don’t need to “do my homework” as I understand the issues. You’re calling a world leading expert (Bruce) a ‘bullshitter’ which I don’t believe for one second.

@BJP

I’m not just basing my assumption on the screenshot posted but also on the writings of Moxie Marlinspike and other technical information out there in the public domain.

Back in early 2014 WhatsApp didn’t provide certificate pinning however they said:

The WhatsApp team told us they are actively working on adding SSL pinning to their clients and we no longer find evidence of export ciphers, null ciphers, or SSLv2 support.
https://www.praetorian.com/blog/whats-up-with-whatsapps-security-facebook-ssl-vulnerabilities

Rolf Weber March 15, 2016 3:38 PM

@No, no, no, no

If you “understand the issues”, then you should know that WhatsApp can successfully MITM all of its clients, regardless whether they use the same key for all or not, and regardless whether they use PFS or not. Got it?

WhiskersInMenlo March 15, 2016 3:54 PM

Slightly different in one technical aspect.

This attack on WhatsApp does not compromise the platform itself, which is still secure enough to conduct commerce, still secure enough to connect to some banking and broker features, and still able to reliably update itself.

It is insidious as the arrest in Brazil indicates.

The issues of industrial and international espionage
worry me at both extremes.

Time to load up WhatsApp and brush up on my Morse code.

Nikita Osipov March 15, 2016 4:25 PM

Actually, there are multiple issues with software to be concerned about:

  1. Even if the software in question is open source, 99% of people won’t compile it from source code and will just use the provided binary from the application store on their platform. Of course, one can argue that inside a particular organization some tech guys can compile and distribute it for everyone. Seriously, that’s not the case with 99% of organizations, which don’t even have a tech guy or don’t really understand the security threat.
  2. The most secure app can have a very simple mechanism to just forward plaintext messages somewhere else, and that makes all the implemented end-to-end encryption moot. It’s that easy: no sophisticated MiTM attacks needed, just a very simple block of code that sends the decrypted data somewhere else. 99.999% of users won’t run sniffing software all the time to monitor outgoing traffic or implement a very strict firewall. The latter would make the lives of employees miserable and won’t help get work done.

As mentioned in the article published by Wire (referenced above), one should not confuse law and technology. Leave the law to law guys and tech to tech guys; these two things cannot be mixed in a fight. See any discussion and the above articles on asking to implement a backdoor one way or another. It’s all about being compliant with the law. No one is asking to change the algorithm in some weird way that makes it prone to attacks.

So? What’s the point you may think of all of this?

It’s actually quite easy: history repeats itself. It all started back in the day when some guys installed their own chat services and used available open source software to run on those services, but it was not convenient and was cumbersome to explain to end users. Then apps came in with a better UI/UX. What’s next? Yes, right, you guessed it: back to services and software. What does that mean?

I truly believe that companies that can and/or will provide a PaaS solution with open source frameworks to access their services across different platforms will actually be a way out of this mess.

Key points are:

  1. No one will rely on a single piece of software. As the specs are publicly available, everyone can implement their own client software or use one from any compatible third-party vendor. It won’t be possible to intercept the data on the server side, just as with end-to-end encryption nowadays.

  2. MiTM attacks still pose a threat, but it can be mitigated by the provided frameworks insisting that the end user check the real identity of his/her contact, by not allowing the identity verification step to be skipped at the beginning of a conversation or if the key’s fingerprint changes. It’s not about the UI, it’s about a very important concept: you need to build security first and then all the rest. The app should not allow this step to be skipped. The identity verification (public key fingerprint comparison) should not depend on an “Accept/Deny” choice by the user. There are things like SMP (the Socialist Millionaire Protocol) that require you to actually input data to verify it, which mitigates the risk of users just tapping “Next”. Without the actual numbers/words they receive from their contacts, they won’t be able to pass the identity verification stage.

  3. Decentralized architecture, not in the sense of having no central server, but in the sense that the server and client software come from different sources, so if something changes everyone will notice, and there is no single point of failure because the available software must comply with the described protocol. However, the legal problem still exists if there is an order to modify the code of the client software in use to send decrypted messages somewhere else. But as mentioned earlier, law and tech do not go hand in hand.
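The forced-input verification described in point 2 can be sketched in a few lines. This is a toy construction, not the actual Socialist Millionaire Protocol: it just derives short words from a shared secret that each user must actually type in, so verification cannot be waved through with an “Accept” tap:

```python
import hashlib
import hmac

# Small illustrative wordlist; a real design would use a larger one.
WORDLIST = ["alpha", "bravo", "charlie", "delta", "echo", "foxtrot", "golf", "hotel"]

def session_words(shared_secret, count=4):
    """Map the first bytes of a hash of the shared secret to words
    that both parties can read to each other out of band."""
    digest = hashlib.sha256(shared_secret).digest()
    return [WORDLIST[b % len(WORDLIST)] for b in digest[:count]]

def verify_entered_words(shared_secret, entered):
    """True only if the user typed exactly the words the peer read out."""
    expected = " ".join(session_words(shared_secret))
    return hmac.compare_digest(expected, " ".join(entered))
```

The point is purely UX-architectural: because `verify_entered_words` needs real input, there is no one-tap path past it.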

No, no, no, no March 15, 2016 5:03 PM

@Rolf Weber

You’re totally missing the point of the blog post.

It seems the FBI are asking WhatsApp to introduce a (new) vulnerability* that would allow them to decrypt data without requiring WhatsApp to compromise all of its users.

*i.e. it doesn’t presently exist yet you suggest Bruce is being disingenuous about this

Any vulnerability is bad because it can be exploited and should be resisted at all costs.

Rolf Weber March 15, 2016 5:09 PM

@Nikita Osipov

You talk about how it could be, but the reality is how it is, and this means that currently WhatsApp is able to decrypt messages, and thus can be compelled to do so. Period.

Of course they could make it “more secure”. Let it remain undecided whether that would really be “government-proof” or not. But this “extra security” comes with a price called user experience and convenience. I doubt big players like WhatsApp & Co. will pay this price. But good luck convincing them.

Clive Robinson March 15, 2016 5:12 PM

@ Rolf Weber,

… criminals cannot simply take advantage of and solely rely on WhatsApp’s encryption.

Having been called out on your error you now change your tune by adding “solely”.

Not exactly intellectually honest of you.

The important point is that WhatsApp is a “convenience of communications” application with maybe some added security; it is not a security application.

Which is fine if you are aware of the issues. However most “law abiding citizens” will not be aware, but your more experienced criminals will be. The advantage of the “added security” is that it makes life harder for the authorities because all traffic will be encrypted, so those wishing to communicate securely will not stand out statistically as they used to do when the bulk of traffic was not encrypted.

However, whilst some criminals will add a layer of enciphering, the really smart ones will use some form of “deniable code”. Which means that they can quite happily use WhatsApp and hide away in the bulk of other traffic.

Why go to the lengths of using a code rather than a cipher? Because there is a risk the likes of the FBI will get their way, and thus you cannot be sure whether the communications security in WhatsApp will be stripped off via a backdoor or not. If it is stripped off, enciphered comms will once again stand out statistically, but a good code will not.

As some are no doubt thinking, it’s possibly time for applications to take a step backwards in time and use some of the ideas that made the *nix CLI so useful.

There is no reason why WhatsApp or any other communications app needs to be “all things to all men”. Having an “editor app” send its file or save output via a “pipe” or “stream” to WhatsApp, which then communicates it to the distant user, is preferable in many ways. Not least because it allows you to push another enciphering or encoding app between the editor app and WhatsApp or another communications app. This can be used as a higher-level framework in which ciphers or codes can be easily pulled out and replaced should issues arise with them. Likewise a user can decide which editor etc. they prefer.

In some respects the graphical user interface was a backwards step. If you look back in time you will find things like Apple’s “Pink” etc. that tried to make things work. As for users building their own SuperApps from applets, a simple concept would be somewhat similar to National Instruments’ LabView interface.

If just one app was a scripting language that could handle large integers –which Python is supposed to do– then a user could build their own encryption library or get hold of an Open Source one from somewhere around the globe. It was partly this idea and the PerlCrypt T-shirt which made crypto wars one winnable, and a modern version could be a 2D bar code not too dissimilar to the QR codes you see quite frequently in adverts these days.
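As a concrete illustration of the big-integer point, textbook RSA fits in a few lines of Python with no external library at all. The primes here are the classic toy values, so this is utterly insecure and for illustration only:

```python
# Textbook RSA with tiny primes -- demonstration of Python's
# arbitrary-precision integers, NOT usable cryptography.
p, q = 61, 53
n = p * q                  # public modulus, 3233
phi = (p - 1) * (q - 1)    # 3120
e = 17                     # public exponent, coprime with phi
d = pow(e, -1, phi)        # private exponent via modular inverse (Python 3.8+)

def encrypt(m):
    """Encrypt an integer message m < n."""
    return pow(m, e, n)

def decrypt(c):
    """Decrypt an integer ciphertext."""
    return pow(c, d, n)
```

Real RSA needs thousand-bit primes, padding, and constant-time arithmetic, but the language primitives are all there, which was exactly the point of the munitions T-shirts.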

But even if the likes of the FBI tried backdooring everybody’s smart phone or PC keyboard –as Win10 is reputed to do– there is still the use of paper-and-pencil ciphers and printed code books, which with correct usage are still unbreakable.

So the FBI is on a losing wicket and knows it, as far as even moderately sensible criminals are concerned. Which leaves the generally law-abiding citizens as their target.

Henry March 15, 2016 5:37 PM

Privacy conscious users or criminals will choose Signal (for voice and text) or Telegram Secret Chat (for self-destructing texts).

Both apps are encrypted, both allow the fingerprint to be verified and both are reasonably secure. The benefit of Telegram for text messages is that it allows a timer to be set on each message (or a whole chat) after which it is deleted. Telegram also has a feature to destroy the account if it isn’t accessed within a user-defined period, and separate PIN protection for the app.

Dan March 15, 2016 5:49 PM

If I were going to backdoor a protocol, I would store the ‘master key’ in a hardware security module. I would set the HSM to act as a decryption oracle, but have it wait 8 hours before decrypting anything else. This would work for the government(s) for a small volume of cases, but it wouldn’t scale well. Storing the ‘master key’ in an HSM that physically enforces a delay seems to be a reasonable compromise. The government(s) would be encouraged to only use the backdoor for cases that are important. (It might be a good idea to have a few backup HSMs, in case something happens.)
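The delay-enforcing oracle can be sketched as follows, with plain Python standing in for the HSM. The 8-hour figure and the escrowed ‘master key’ are from the comment above; the decryption itself is stubbed out as a caller-supplied function:

```python
import time

class DelayedOracle:
    """Decryption oracle that enforces a fixed delay between operations,
    so an escrowed key cannot be used at dragnet scale. Illustrative only;
    a real HSM would enforce the delay in tamper-resistant hardware."""

    def __init__(self, delay_seconds, decrypt_fn):
        self.delay = delay_seconds        # e.g. 8 * 3600 for 8 hours
        self.decrypt_fn = decrypt_fn      # stands in for the master-key operation
        self.next_allowed = 0.0

    def decrypt(self, ciphertext):
        now = time.monotonic()
        if now < self.next_allowed:
            raise RuntimeError("oracle busy: next decryption not yet allowed")
        self.next_allowed = now + self.delay
        return self.decrypt_fn(ciphertext)
```

The design choice being illustrated is rate limiting as a policy lever: the hardware, not an administrator, caps how many warrants per day the backdoor can serve.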

Thoth March 15, 2016 6:03 PM

@gerald
There is the Serval Project, which does part of what I describe using mesh networks to communicate off the grid, linked below. The other parts have not been implemented in real-world secure chats (multicast-style messaging …etc…) as it is more indirect to script such a messaging system than the poisonous server-to-client architecture.

Regarding checking the bulk of Open Source code, that is a good concern. A good example of Open Source bloated with crap is OpenSSL. A small core security TCB can be used, and the protocol should be simple and thus easy to verify and implement as well.

I was considering creating a box-in-a-box encryptor for Android by means of intercepting GUI from other windows and encrypting them as a secondary standalone app, but it seems that the only possible way to do so is to work on rooted phones, which would not be easily accepted by most people, thus hindering adoption. Most messaging apps don’t provide a decent API, or even a simple one, for users and devs to create an off-application encryptor program anyway.

Other option is to create a secure entry device tied to an app with a bypass mode to send all ciphertext to that separate device for decryption and display on the device and secure message entry on the secure device while using the smartphone as a TX/RX device. The security device would likely cost quite a bit and I guess not everyone wants to carry a bulky 2nd limited function secure device just to chat securely on the go. It is assumed that the secure device would have very limited memory and functionality and thus makes secure comms slower and more cumbersome.

Link: http://www.servalproject.org

Thoth March 15, 2016 6:17 PM

@Dan
What you described already exists, more or less, in SecuSmart’s product called SecuVOICE. Essentially it is an HSM with 64 pieces of “smartcard chip” inside to support 64 simultaneous encrypted calls in a centralized location. The big problem is: would you trust it if it were Government-deployed for citizenry use? Centralized communication is getting outdated due to the insecurity of centralized routing points for data.

@all
Bittorrent’s closed-source chat, linked below, is a good idea in using a decentralized torrent P2P network to chat, but due to the closed-source nature of its implementation and claimed security, it is at best untrusted.

Link: http://www.bleep.pm/

Wael March 15, 2016 6:30 PM

@Thoth,

Other option is to create a secure entry device tied to an app with a bypass mode to send all ciphertext to that separate device for decryption and display on the device and secure message entry on the secure device while using the smartphone as a TX/RX device.

You mean something like this?

I guess not everyone wants to carry a bulky 2nd limited function secure device just to chat securely on the go.

This isn’t bulky and not too limited either, unless you were thinking of using this for an encryption device!

The security device would likely cost quite a bit

Sub $30.00, whether you use C.H.I.P or Raspberry Pi zero or similar.

@gerald,

eventually it will be illegal to make, use or sell any software that performs as you say.

Given the current state of affairs, you’re likely correct.

Stop Anal Probes by Flying Saucers! March 15, 2016 6:40 PM

See, this is the kind of conversation people have inside the US propaganda bubble.

“Brazil has arrested a Facebook employee because WhatsApp is secure.”

Falsimundo. Bullashito. Police in Brazil arrested a Facebook employee because Facebook ignored their judicial order.

http://www.pf.gov.br/agencia/noticias/2016/03/pf-cumpre-mandado-de-prisao-em-desfavor-do-representante-do-facebook-no-br

A court released him the next day, by the way. But Wisner’s wurlitzer went to town and now this is every American’s example of everything: Bruce’s example of despotic overreach, skeptical’s example of totalitarian ‘see, everybody does it.’

Now why would Facebook ignore a judicial order instead of contesting it? You cannot interpret this arrest facticule without reference to CIA destabilization efforts in Brazil, which feature Zuck himself parroting the canned slogans of the vinegar revolution. US social media is part of a vilification campaign in support of coercive US foreign interference in Brazil.

http://www.globalresearch.ca/brazils-vinegar-revolution-left-in-form-right-in-content/5346336

Smart or dumb, propaganda victims can’t reason their way out of a paper bag. This is one big reason why US subject matter experts can’t comprehend state obligations.

Andy March 15, 2016 6:46 PM

@Nikita Osipov, @Clive Robinson,

What you describe, this “end-client-agnostic general-purpose messaging framework”, actually exists and it is called XMPP.

Thoth March 15, 2016 7:16 PM

@Wael

re:nShield Connect
Lol … that is my daily bread and butter for HSM deployment. I would keep a little distance you know 😀 . That stuff is complex, and end users would rather use plaintext messaging. Anyone using or deploying HSMs would know they are just too unfriendly for now for most normal users.

I think you meant this (Trusted Verification Device), linked below. We call it the few-thousand-dollar “secure calculator” 😀 . I am not privileged to expose the actual price here, but what I can say is that the TVD a.k.a. expensive calculator is in the range of thousands of US dollars for something so simplistic-looking, and that’s what I am talking about regarding portability. It is somewhat more portable but still too cumbersome to pair with a smartphone. It is as good as holding two smartphones in one hand when messaging … good luck with dropping and cracking the phone or the handheld HSM….

What I originally referred to is the Ledger Blue personal security device (linked below). It is a touchscreen integrated with a ST31 smartcard chip and a STM32 MCU to go in between the touchscreen and the ST31 smartcard chip. This is effectively the closest we get to a “programmable HSM” of sorts. I have contacted the guys at Ledger and they told me the source code for the device’s OS would be placed under Open Source on Github, but I have not seen any code there yet. Since the device uses a ST31 smartcard chip, I emailed them to enquire about the environment; according to their reply it would be based around a Java-like language, which I would assume is a JavaCard-compatible language of sorts (JavaCard being used for programming JavaCard-capable smartcards).

I have voiced out the possible problem of holding two pieces of smartphone sized devices (in case you accidentally drop any or both of them) for which I have been assured that the device’s size is small enough that it fits well in the palm.

My original plan was to implement a very lightweight file encryption and communication protocol with its security-critical base in the form of Blue device applet code, just barely enough to fit into the “secure confines” of the ST31 chip on the Blue device. The laptop or smartphone would be used as the TX/RX channel. The user can type into the laptop or smartphone, with processing on the Blue device, for lower security. For higher levels of security, a “bypass mode” would be used whereby the entry of the plaintext for conversations would be done directly on the touchscreen of the Blue device, and upon receiving an encrypted message block, it would decrypt and display on the Blue device touchscreen instead of the laptop or smartphone. The Blue device is allowed to buffer encrypted caches to the attached TX/RX device when its memory cache is full.

Links:
https://www.thales-esecurity.com/products-and-services/products-and-services/hardware-security-modules/general-purpose-hsms/nshield-remote-administration
https://ledgerwallet.com/products/9

Wael March 15, 2016 7:25 PM

@Thoth,

re:nShield Connect
Lol … that is my daily bread and butter for HSM deployment.

I know! I can read your mind like a clear text eBook 😉 Be careful now!

Brauchbare Menschen March 15, 2016 7:34 PM

Today’s Big Lie from skeptical: “we balance that right to privacy against other values.”

Ask him what, exactly, those values are. You’ll get 60,000 words and no answer. He can’t tell you.

Here’s the right answer. From HRC General Comment 16, paragraph 4, interpretive authority for the US privacy law with which US law at all levels must conform: ‘the expression “arbitrary interference” can also extend to interference provided for under the law. The introduction of the concept of arbitrariness is intended to guarantee that even interference provided for by law should be in accordance with the provisions, aims and objectives of the Covenant [the ICCPR] and should be, in any event, reasonable in the particular circumstances.’

The provisions, aims and objectives of the Covenant. If a government bureaucrat can’t justify his privacy interference with Covenant chapter and verse, spirit and letter, he’s shitting on the supreme law of the land. He’s not qualified to be a mall cop. That goes for the fanatic Comey, head of the Opus Dei P.D. That goes for skeptical, head of Team America, Fantasy World Police. US state privacy interference must meet necessity and proportionality tests. If you’re a US bureaucrat like Comey and you do not understand that, go fuck yourself. Get some UN development training like the Africans get and then come back.

Dirk Praet March 15, 2016 8:27 PM

@ Rolf Weber, @ vwm, @ Lisa, @ No, no, no, no, @ Gerald

Schneier said WhatsApp could only comply with creating new vulnerabilities and patching the client. But this is simply not true. The man-in-the-middle vulnerability already exists, and the client doesn’t need to be changed (because the WhatsApp client currently doesn’t alert on certificate changes; it’s not even configurable).

Unless the WhatsApp client contains a user interface that either reports on or stops dodgy key exchanges dead in their tracks, there is no need to “patch” it. Matthew Green is right in saying that “If you’re developing a messaging system that relies on a centralized, trusted key server, now is the time to rethink that design.” Almost by design, any central key server is vulnerable to a MITM-attack.

In order to use this vulnerability, someone needs to write an exploit. In this case, that would be an operator interface to inject LEA keys to decrypt both the messages sent and received by a user under scrutiny. Such an interface – which is unproven to exist today at Whatsapp – is a backdoor into the system and is exactly what the FBI is asking here. That backdoor in its turn becomes a significant vulnerability if tapped into by the wrong people. And that’s what @Bruce meant when he says that Whatsapp can only comply by creating a new vulnerability. Got it, @Rolf ?
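The missing client-side behaviour being argued about here can be sketched as trust-on-first-use key tracking (illustrative Python, not WhatsApp’s actual code): once the client remembers a peer’s key, a server-side key swap at least becomes detectable instead of silent.

```python
# Trust-on-first-use (TOFU) key memory. A client that keeps this table
# can alert the user when a central key server substitutes a new key
# mid-conversation -- the alert the current WhatsApp client lacks.
known_keys = {}

def check_peer_key(peer, fingerprint):
    """Return 'new' on first contact, 'ok' on a matching key,
    or 'CHANGED' when the presented key differs from the remembered one."""
    previous = known_keys.get(peer)
    if previous is None:
        known_keys[peer] = fingerprint
        return "new"
    return "ok" if previous == fingerprint else "CHANGED"
```

A 'CHANGED' result is not proof of an attack (people reinstall and change phones), but surfacing it is what turns a silent MITM into a user-visible event.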

@ Skeptical

The insistence that I see from some quarters that it’s all or nothing – that either something is secure against search warrants as well as thieves or it’s not secure at all – seems more driven by emotion than by a rational deduction from the state of technology.

Well, no, and it’s something pretty much the entire crypto community agrees upon. Either everyone is safe, or no one is. Claiming the opposite is a purely ideological stance that has no roots in reality. Over time on this blog, plenty of examples have been given of government backdoored stuff that eventually fell into the wrong hands.

we’re also going to cooperate with the government when, in good faith, we believe that we are ethically or legally obliged to do so.

It would seem that both Apple and Facebook at this time believe in good faith that they are neither ethically nor legally obliged to honour the government’s wishes in the cases currently on the table.

the very fact that we’re willing to lose your business in order to behave responsibly should be an indication that we’re more trustworthy than most.

To me, that reads like “Hello, we are AT&T. We don’t give a rat’s *ss about your privacy and security because we are and always have been government minions”.

@ All

Anyone even remotely security/privacy aware should have switched to the likes of Signal or XMPP/OTR a long time ago. For the paranoid or those up against state actors: dig into the comments of @Wael, @Clive, @Thoth, @Anura and @Nick P.

Ben March 15, 2016 8:32 PM

@ Rolf Webber

“What is most troubling is that techies again lie about technical facts. Like here Bruce Schneier. He claims that WhatsApp had to implement new vulnerabilities and to update the client. That is simply bullshit, and Bruce knows it. But he claims it anyway, because it fits into his ideology.

And nobody of the “experts” here objects this obvious technical nonsense. The tech community is a sect.”

@ Mike Gerwitz

“I’m under the assumption that users connect to WhatsApp servers—or maybe even proxy through them—to provide a more pleasant experience for users. Please correct me if I’m wrong.”

It may be more obvious for developers coming from the days of single-threaded mobile devices, such as the early BlackBerrys. For an instant messenger to function properly in today’s mobile network and circumvent SMS, it has to have at least two data travel paths. Thus, the choke point isn’t entirely in the hands of the messenger provider. The encryption, if any, has to be at the app level, so the opinions don’t seem illogical to me.

Wake me when it's over... March 15, 2016 8:43 PM

It gets worse.

Senators Burr (R) and Feinstein (D) have introduced a bill to force corporations to assist the government in decrypting data files under threat of civil penalties, presumably fines.

Media reports will tell you Congress has no stomach for this kind of legislation during the election year and of course is generally useless and incompetent regardless.

However, the standard gambit is to attach a law such as this to an important spending bill and it passes in the dead of night without debate almost unanimously.

Gross misgovernance like this is one of the reasons “establishment” politicians are in so much disfavor now.

Apparently the Brits are working on laws even worse. I would assume the other Five Eyes countries will follow suit.

JdL March 15, 2016 9:20 PM

We live in scary times when our governments want us to reduce our own security.

In other words, we live in scary times when our government is our most dangerous enemy.

Clive Robinson March 15, 2016 10:49 PM

@ Andy,

What you describe, this “end-client-agnostic general-purpose messaging framework”, actually exists and it is called XMPP.

It’s one candidate (SIP is another) for one part of it.

I tend to talk in a “no name” way about such things because things appear to change with specific protocols every couple of blinks of the eye, and it’s the concept I want people to think about rather than get bogged down in specific protocols.

I tend to view things loosely in a three-layer approach of “infrastructure”, “applications” and “objects”, and the interfaces they need to link or be embedded.

In my view what we need is a “backplane” architecture in the levels so that parts can be “pulled & replaced” quickly and easily, without the need to recompile / rebuild / rearchitect.

Such views put me at odds with a number of segments of the software industry, but not other engineering fields of endeavor, where such thinking is so much the norm that it’s treated as a given.

I could go on at length but it would probably make your eye lids droop 😉

Thoth March 15, 2016 10:49 PM

@Dirk Praet
XMPP and OTR are actually rather complex stuff. I wouldn’t touch them with a 10-foot pole either. I would prefer something much more simple, like a binary-based protocol that is as close as possible to using binary tags, lengths and fixed positions. If you followed @Markus Otella’s attempt to implement OTR on his TFC setup, there was a point where the traffic was so bloated it was treated as spam, although to be fair I believe he used One-Time Pads? Maybe @Markus Otella could correct me if I am wrong.

For XMPP, a message is in XML format and that is a bloat of space where a binary blob would have been more compact and faster.

OTR itself has weird ratcheting and the SMP protocol. I wouldn’t want such complexity when implementing on constrained embedded hardware. Neither will the Axolotl ratchet used by Signal work on embedded systems, due to the huge complexity and bloat of its attempt to be PFS by embedding a public key in every exchanged message and constantly using ECDH crypto on every message rather than symmetric keying.

Both ratcheting schemes can be used for asymmetric crypto, but it’s mostly ECC-based if you are looking at Signal’s version of the ratchet. ECC is short and nice but it still doesn’t feel all too secure, so I would prefer the good old RSA and DH; but continuously inserting a per-message public key is going to eat up too much space, considering embedded devices’ RAM is pretty small and they are slower in computation power.

Neither XMPP nor OTR/Axolotl ratchet schemes would be suitable in my opinion. I am still trying to come up with a more compact binary messaging scheme which I am still working on to target highly constrained embedded devices.
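A compact binary framing of the kind described above can be as simple as tag-length-value records; the sketch below is illustrative Python (the tag values are arbitrary, not any real protocol’s), and the same fixed-position parsing translates directly to C on a constrained device:

```python
import struct

def encode_tlv(tag, value):
    """One record: 1-byte tag, 2-byte big-endian length, then the payload.
    Compare the 3-byte overhead with an XML element's angle-bracket bloat."""
    return struct.pack(">BH", tag, len(value)) + value

def decode_tlv(blob):
    """Yield (tag, value) pairs from a concatenation of TLV records."""
    offset = 0
    while offset < len(blob):
        tag, length = struct.unpack_from(">BH", blob, offset)
        offset += 3
        yield tag, blob[offset:offset + length]
        offset += length
```

Fixed offsets mean the parser needs no recursion, no string handling and almost no state, which is exactly what a small TCB on embedded hardware wants.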

Links:
https://otr.cypherpunks.ca/Protocol-v3-4.1.1.html
https://github.com/trevp/axolotl/wiki

The Traveler March 15, 2016 11:48 PM

@Wake – that sounds like some backup plan if the all-writs thing goes south on them (no offense to southerners). Which can only mean that it’s just more circus, as it has nothing to do with the threat surface involving pgp and gmail AFAICT.

@Lisa – the tech industry has been getting by selling crap for many years, I don’t think they are afraid of getting killed off anytime soon. As if they aren’t in the too-big-to-fail class already.

Ron Williams March 15, 2016 11:54 PM

I would argue that the FBI should commend them. The whole point of encryption is to keep people from being able to decrypt the sensitive information. You’ll probably not hear much more about this except from concerned citizens like Schneier. These are the kinds of news stories that seem to disappear.

Spooky March 15, 2016 11:58 PM

I find it wryly amusing that no one seems ready to embrace the obvious solution: stop buying, carrying and using the devices in question. If the technology becomes permanently tainted by entitled government agencies, you should probably refrain from using it entirely. Rather than trying to perpetually stay one step ahead of an adversary with limitless resources, simply disengage and step back. The best way to rebuff Washington’s authoritarian tendencies is by flushing most of Apple’s quarterly profits down the gutter. Somehow, we all did manage to live meaningful, socially-connected lives in the decades prior to global cell networks…

Clive Robinson March 16, 2016 12:02 AM

@ Wael,

Now comes the “political part” — which is not so easy to fix…

You mean ISO OSI seven layer add-ons 8 through 11+ (or whatever they are up to today 😉

I tend to think,

8, Management…
9, Legislation…
10, Local political.
11, Local Societal.
12, Global political.
13, Global Societal.

However there appears to be an “inverse law of understanding” as you go up the levels.

King Cnut (Knut, Canute) is famed for his demonstration around 1030AD of the limits of the power of Kings and mortal men, showing his court sycophants that although he supposedly commanded the land he did not command the sea. It is something that the current crop of king makers, king whisperers and puppet kings such as presidents, Prime Ministers and their advisors appear to have forgotten.

Most here understand that it is relatively easy for humans to use a pencil and One Time Pad [1] to pre-cipher plaintext messages and then send the resulting ciphertext across open or backdoored communications systems securely. The use of a zippo or match then renders pad and message unrecoverable at the senders end, likewise the pad and ciphertext at the recipients end.
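The pencil-and-paper scheme is easy to state precisely; below is the classic additive mod-26 one-time pad in Python (the pad “XMCKL” in the test is just the textbook example; a real pad must be truly random, as long as the message, and never reused):

```python
import string

ALPHABET = string.ascii_uppercase

def otp(text, pad, decrypt=False):
    """Additive one-time pad over A-Z: add the pad letter-by-letter
    mod 26 to encrypt, subtract it to decrypt. Exactly what is done
    by hand with a pencil and a printed pad sheet."""
    sign = -1 if decrypt else 1
    return "".join(
        ALPHABET[(ALPHABET.index(t) + sign * ALPHABET.index(p)) % 26]
        for t, p in zip(text, pad)
    )
```

With a truly random, single-use pad the ciphertext is information-theoretically secure, which is why no compelled third party, backdoor or supercomputer helps against it.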

Provided the rules are followed, no third party can be compelled by AWA usage, National Security Letters or FISA courts. Not even the supercomputers or analysts at the NSA could help.

Likewise, neither will any setting of precedent or new legislation.

In the UK this has been known about, and importantly acknowledged in legislation, for some time (RIPA). There, rather than uninvolved third parties, the first or second parties are given the choice of jail time or handing over the KeyMat. However there is a defence by which if you can –prove a negative and– demonstrate you no longer have the KeyMat you will not be jailed.

As some will know there are ways this can be done technically. The UK legislators were warned of this prior to RIPA, but they chose to go with what was thought workable (but draconian). But this is not sufficient for the current crop of “faux Cnuts” like David Cameron and Theresa May (who had no clue as to what meta-data is, nor cared). Their solution, dubbed “The Snoopers Charter II”, was going to make innocent third parties reveal the content of encrypted messages or suffer significant penalties. I’ve yet to see if they have modified that position in the latest draft.

Politicians and legislators need to come to terms with the fact that legislation can no more make Pi equal to 3 than they can order the tide to stop. That is, the laws of mathematics and the laws of nature are beyond modification by the laws of man.

[1] Most here also know that OTPs, whilst secure for communications, are in practice quite unwieldy due to the KeyMat distribution issues.

Andy March 16, 2016 1:24 AM

@Clive Robinson,

I agree.

In my view, it is better to move intelligence as close to the user as possible, leaving the transport infrastructure as dumb as possible, only changing it for efficiency.

Rolf Weber March 16, 2016 2:03 AM

@Dirk Praet

If you insist on it, we can discuss it until the end. Here is what Bruce said:

This case would be about forcing WhatsApp to make an engineering change in the security of its software to create a new vulnerability — one that they would be forced to push onto the user’s device to allow the FBI to eavesdrop on future communications.

And this is plain wrong in two regards:

First, there is nothing that needs to be "pushed onto the user's device". Even you said that the client doesn't need to be patched. So you and I said basically the same thing, and we both contradicted Schneier, but instead of objecting to his plainly wrong claim, you chose to argue with me. And this is what I said: you show the behavior of a member of a religious sect.

The second thing is that it is not about "creating a new vulnerability". The vulnerabilities already exist: the central server, and clients that don't check keys. All that is needed is to write an exploit for these vulnerabilities. Nobody knows whether WhatsApp has already done so, but they could certainly write one at any time, with or without the government's or anyone else's request.

And here again, you also describe it as writing an exploit, so again you say basically the same thing as I do, but instead of correcting Schneier's plainly wrong claim you start a discussion with me, who basically says the same as you. Religious sect, qed.

Wael March 16, 2016 2:47 AM

@Clive Robinson,

You mean ISO OSI seven layer add ons 8 through 11+

Yes, a new layer is born every few years. The number of layers doubles every 18 years 😉

Clive Robinson March 16, 2016 4:50 AM

@ Rolf Weber,

You say,

And this is plain wrong on 2 regards:

And then go on to claim the app already has a vulnerability in the client that can already be exploited.

However if we look back to what Bruce has said we see the following,

    In the WhatsApp case, chat data is end-to-end encrypted, and there is nothing the company can do to assist the FBI in reading already encrypted messages.

Which, if other information is correct, appears to be true (i.e. the keys are ephemeral and the company never had access to them).

Bruce goes on to say,

    This case would be about forcing WhatsApp to make an engineering change in the security of its software to create a new vulnerability — one that they would be forced to push onto the user’s device to allow the FBI to eavesdrop on future communications.

To which you object by saying "And this is plain wrong…"

But this raises a question: if you are right, then why would the FBI be heading to court to request that changes be made, which is the premise of the original article?

Because if what you say is entirely true and possible in all uses of the application, then the FBI has no need to go to court to get changes. But if the FBI cannot do as you say, for any reason, at any future time, then Bruce is right and the FBI will be seeking changes to the WhatsApp client…

Now I can see reasonably good reasons why the FBI might need changes to the client irrespective of the vulnerabilities you claim, as I'm sure others can after a few minutes' thought.

Now I don’t have access to WhatsApp’s code or protocols so I can not say that what is additionaly required is not there, but that brings us back to why the FBI want to go to court to get changes made but you argue they don’t need to. Unless you can come up with other technical arguments then the balance of probability is not currently with you.

But instead of doing what is required, you appear to want to accuse others of some kind of conspiracy… and quite rudely so. The question thus becomes "Why?". What possible advantage would there be to having such a conspiracy? What would it gain those you accuse?

You need to be careful, lest others start to ask “What’s Rolf Weber’s motivation?”, “What does Rolf get out of it?”, “Why is Rolf taking this odd stance?”.

Rolf Weber March 16, 2016 4:56 AM

And we don’t live in scary times where governments want to reduce our security, we live in times were governments push back to irresponsible, Snowden-lies-inspired PR stunts of companies. And even if the government loses the Apple and WhatsApp cases before the courts, then Congress will stop this irresponsible nonsense. Sooner or later.

Rolf Weber March 16, 2016 5:06 AM

@Clive Robinson


But this raises a question, if you are right then why would the FBI be heading to court to request changes be made, which is the premise of the original article?

We don’t know for sure what the FBI really demanded or will demand, because the case is under seal, but those of us who know about the technology can assume with a certainty of 99% that they will demand that WhatsApp writes an exploit for a man-in-the-middle attack.

An exploit is no "change", and it is an exploit on the WhatsApp server; no client needs to be touched.

Clive Robinson March 16, 2016 5:53 AM

@ Skeptical,

If a thief needs to obtain both the compliance of a company and the assets and position of a nation-state to break into my flat[1]

This is a bad argument to use. Most governments, for health and safety reasons, have via zoning and other legislation and regulation a requirement for "first responder" / "fire brigade" keys, such that they can gain easy access to buildings etc.

They are thus like “city wide” master keys and have fairly frequently been used not just by criminals but others wishing to gain access to buildings and areas, including residential spaces.

So the compliance of both the government and lock company as well as the building owner / manager is already assured, the criminal just has to get a copy of a widely held key.

And that’s the problem with backdoors / frontdoors or any other technicaly similar changes, once they are put in place it’s only a question of time before the keys get coppied.

We’ve already seen this with the “TSA Luggage keys” where some one was actually stupid enough to put photos of them online.

The difference between tangible physical keys and intangible information keys is much greater than many would consider at first sight. Physical keys require not just locality to the place of attack but also resources to carry out the attack and duplicate the keys. This places a natural limit on the level of harm that can be achieved with physical keys. Information keys require neither locality nor resources to attack, thus there is no natural limit on the level of harm that can be achieved.

Which brings up the question of "balance" you and others like to talk about, while others talk about proportionality. I would argue that the TSA keys are way out of any kind of balance that an ordinary person would consider fair on being acquainted with all the facts, and thus are excessively disproportional.

Putting such weaknesses in all communications is way out of balance with any governmental or LEO –supposed– need for the protection of citizens; it is way beyond excessively disproportional.

The Government, when dealing with non-terrorist activities, makes risk-assessed cases when dealing with the harms of death, injury and distress. It puts a monetary value on human lives (around $1m/person), and less so on other harms. In theory it balances these against the cost to society, in an annualised or other normalised measure, before taking action in the citizens' interests.

It’s become quite clear that when it comes to terrorism and other very very rare events various governments have faild in any way to consider the cost of harms not just to the economy but to individual citizens as well.

The cost of harms from attacks made with such information backdoors defies the normal natural limits of physicality and thus is beyond normalised measure. To see this, compare the data lost via information transfer of the OPM records with what would have been possible for the same number of involved people trying to make and take physical copies in the same time period. What is the magnification factor? Probably up in the 10-100 million range…

This is way past "unknown territory" in the usual actuarial way of assessing risk. Thus the balance point cannot be identified in a meaningful manner, hence there is no acceptable risk point, which makes all such backdoor technology an unacceptable risk.

[1] Hmm, I thought the more usual US term was "apartment", not "flat" as you hear in England.

Dirk Praet March 16, 2016 6:48 AM

@ Rolf Weber

First there is nothing that needs to be “pushed onto the user’s device”. Even you said that the client doesn’t need to be patched.

It does not need to be changed if it is technically feasible to do all of the government's bidding by a MITM attack on the server side only. But since we don't know what exactly has been asked, and to what extent the server-side exploit would also reflect on the client, it is perfectly conceivable that it would need to be patched too. An a priori claim that this would never be the case is thus pure speculation on your behalf.

The second thing is that it is not about “creating a new vulnerability”.

Yes it is, @Rolf. I'm very sorry we can't get it through your thick skull that the new vulnerability here is the exploit the FBI wants WhatsApp to develop, not the fact that every app with a central key server is in theory vulnerable to a MITM attack. That an exploit/operator interface for that purpose would already exist is again pure speculation on your behalf, and it would have made zero sense for WhatsApp/Facebook to create one at any point in the development of the product.

So as to your allegations that our host is a lying sack of sh*t: you're factually wrong on one count, with a second impossible to ascertain today, meaning either of you could be right or wrong.

An exploit is no “change”, and it is an exploit on the WhatsApp server

But it is, @Rolf, and one that completely compromises the security of millions of users if it falls into the wrong hands. You’re being willfully blind.

And we don’t live in scary times where governments want to reduce our security

That’s exactly what the result of such government mandated backdoors will be, whether it is intended or not. Which once again you refuse to see. And it is beyond me that as a German you completely fail to understand the ramifications of a government that has the power to pry into our lives 24/7 and for which no one can have any secrets.

As to your tone, we would very much appreciate that you turn it down a notch. Your recent comments have evolved from disingenuous and irritating to downright rude and offensive. I get it that you are frustrated that no one with even the slightest clue buys into the benign government and IC gospel according to Rolf Weber, but please go and vent that anger somewhere else. Perhaps you should consider getting a hobby or a girlfriend.

Jim March 16, 2016 7:06 AM

@ Rolf Weber

“And we don’t live in scary times where governments want to reduce our security, we live in times were governments push back to irresponsible, Snowden-lies-inspired PR stunts of companies. And even if the government loses the Apple and WhatsApp cases before the courts, then Congress will stop this irresponsible nonsense. Sooner or later.”

You may be onto something over there. I for one am surprised they bend over backwards to satisfy every telemarketer's needs while fighting the gov on these minor nuisances and making a big fuss about it. 😉

Jeroen March 16, 2016 7:13 AM

Stephan16498, just to chime in on how it works: WhatsApp Web requires that your phone can be reached. In order to enable it, it must be authenticated from the phone. The authentications can be revoked from the phone.

Rolf March 16, 2016 7:24 AM

@Dirk Praet

It was Schneier who claimed that a client update would be necessary. I explained a way to get the cleartext without the need to touch the client. Schneier's claim is simply wrong, technically wrong. Maybe I could choose nicer words, but those are the plain facts.

And no, I will not explain to you again the difference between a vulnerability and an exploit. Maybe you should search this site; there are good chances Schneier explained it before, when it was more "fitting".

Jim March 16, 2016 7:29 AM

@ Stephan16498 saids,

“Third point – Remember WhatsApp was acquired by Facebook, and you think Facebook would allow end-to-end encryption, lose visibility into the messages, and forfeit the gold mine of personal data that they paid $19 billion for?”

No I would not think so, because it wouldn’t be a good business model.

“WhatsApp, end-to-end encrypted, who says so? Only Moxie Marlinspike claimed so”

If nobody knows for sure, then WhatsApp has done a reasonably good job of securing the application. Which brings us to Rolf Weber's claims. It's kind of interesting, but I doubt we'll get a straight answer from any experts on here because they are not obliged to give one.

“I completely challenge the unsubstantiated assumption that WhatsApp is end-to-end encrypted. Moxie Marlinspike gained publicity, and WhatsApp gained a myth. But this is one big honeypot that everybody who care about their privacy should avoid. The FBI/NSA have probably secured a backdoor already through a National Security Letter with gag order, so you’ll never know. But the fact that WhatsApp never confirmed the encryption is just one big, dead canary that speaks for itself.”

The way these upscale "instant" messengers behave is like an extension of the cellphone. FB still has a separate Messenger app on my cell, so perhaps they can correlate my convos with the WhatsApp ones. I don't know, except that wherever you install them, the instant messengers are locked to your cell # and whatever advertiser identifiers may have been planted on it. So I'm not too worried about it.

Dirk Praet March 16, 2016 8:05 AM

@ Rolf Weber

And no, I will not explain you again the difference between vulnerability and exploit.

Excuse me? YOU are going to explain to me the difference between a vulnerability and an exploit? Unlike you, I very well know what I’m talking about, so I don’t think so.

I’ve been working in IT security for about 20 years now, and before I let most of them expire I held the following certifications: CISSP, Red Hat Certified Engineer, Solaris Certified Systems, Network and Security Administrator (including TSOL), MCSE+Messaging, MCDBA, CCNA, CHE, CHFI, CCSA, CCSE, Prince2, ITIL, 5 or 6 CompTIA’s plus a series of others I’m too lazy to look up again.

My knowledge and experience in the field may be dwarfed by that of @Clive and some others on this blog, but WTF were your credentials again?

Clive Robinson March 16, 2016 8:41 AM

@ Rolf,

I explained a way on how to get the cleartext without the need to touch the client.

As far as I can see you've made a suggestion, nothing more, based on, I presume, very limited information, as Facebook / WhatsApp have provided little of it.

Further the nature of the attack you suggest is for most people and the FBI included not that practical to use for full surveillance.

The exploits the FBI have been found to use mostly work by modifying the client to covertly report back unique identifying information, either IP or location information, for various legal reasons. Even if WhatsApp did report such information back from the client, I have doubts that it would be sufficiently robust for evidentiary purposes in a criminal prosecution.

As for,

… we live in times were governments push back to irresponsible, Snowden-lies-inspired PR stunts of companies.

Oh dear, I asked you above to consider three questions people might ask about you. You appear not to have done so.

You also in the past have made an easy to follow trail back to your employer, have you actually read your work contract? You know the bit about bringing your employer into “disrepute”…

Being disrespectful and rude to people on an open forum is never wise, and can be illegal in a number of jurisdictions. Further, people who have done a lot less have discovered it's not profitable, as it has had negative social and employment implications for them. Thus I really urge you, as I have before, to consider what you are saying and doing; it is most unwise, to put it politely.

Rolf Weber March 16, 2016 9:09 AM

@Clive Robinson

In my first comment here, I linked to my blogpost where I described my suggestion in more detail. But I forgot you don't read stuff on Google+, so here is the cut'n'paste:

First let me highlight a fact that the "tech experts" don't tell you: most users, most clients who want to use the messengers, have dynamic IP addresses, are behind firewalls, proxies or NAT gateways, and are often offline. This means other clients cannot connect directly to them, which in turn means any messenger that aims to be usable by ordinary users necessarily needs a central gateway. All users connect to the central gateway, which then connects the clients to each other (and stores messages if a user is offline).

The central gateway is in the middle, where, you guessed it, the so-called "man-in-the-middle" attack is always possible. Always.

The only thing that can prevent the man-in-the-middle attack is that the client checks whether the other client's public key has changed, alerts the user, and the user doesn't send the message (or sends a fake message). Let's explain this with an example: Bob sends Alice a WhatsApp message. It's their first exchange, so Bob sends Alice his public key, and Alice sends Bob hers. Both store each other's public keys (and because WhatsApp wants to be user-friendly, this is all done in the background; neither Bob nor Alice will notice).

Now comes Eve, the eavesdropper, into the game. She has access to the central gateway, and she wants to intercept the messages between Bob and Alice. She has no access to Bob's or Alice's secret keys, so if she wants to be able to decrypt the messages, she has to replace Bob's and Alice's public keys with her own public keys.

This is technically no problem for Eve, because she sits in the middle. But both Bob and Alice could check that their partner’s public key changed, and of course they could become suspicious that someone is eavesdropping on them. But here a lot of practical problems are hidden:

First, the software, the client, of Bob and Alice would have to alert them. As far as I know, neither WhatsApp nor iMessage does this (you know, I told you, popular messengers aim to be user-friendly, and most users would likely be bothered by such alerts appearing every time a communication partner gets a new phone or resets their phone), at least not by default.

The second problem is that even if users are alerted, most will likely ignore the warnings and assume that the partner got a new phone or so (of course this is much less likely if the user is a criminal or terrorist who may anticipate being monitored).

But it gets even better for law enforcement: the service provider of a popular messenger like WhatsApp is also the one who writes the client software, and it is easy for them to ensure that the client never alerts if their ("Eve's") public key is presented.

Of course it is possible that users use an alternate or patched client that still alerts, or that they have additional software installed that monitors the key exchanges and warns them if something suspicious is going on. But likely only technically very skilled people will do this.

So to summarize, it is possible for service providers to implement a surveillance interface even if they use end-to-end encryption. And this interface wouldn't make ordinary users less secure, because it can only be used by the service provider itself, who is technically able to do it anyway, as I explained. So all users should at least anticipate the possibility.

It is not possible in a way that users are unable to notice that they are being monitored; technically skilled and paranoid users could notice. But it would then be up to law enforcement to rate whether the surveillance target (and his communication partners) is such a technically skilled person, and to decide whether they risk the monitoring or not.

As a final remark, all virus-scanning proxies with “SSL interception” enabled do basically the same. HTTPS is also end-to-end encryption. To be able to see the content and scan for viruses, the proxy needs to perform exactly the same man-in-the-middle attack. Where there is a will, there is a way.
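The key-substitution attack sketched above is easy to demonstrate concretely. Here is a toy Diffie-Hellman version in Python (the group parameters are deliberately tiny and insecure, and real messengers use far more elaborate protocols; this only shows the mechanics Eve relies on):

```python
# Toy Diffie-Hellman MITM sketch: Eve, sitting on the central gateway,
# substitutes her own public keys for Bob's and Alice's.
import random

P = 0xFFFFFFFB  # 2**32 - 5, a toy prime modulus (far too small to be secure)
G = 5           # generator

def keypair():
    secret = random.randrange(2, P - 2)
    return secret, pow(G, secret, P)

def shared(secret, their_public):
    return pow(their_public, secret, P)

# Bob and Alice each generate a keypair and hand their public key
# to the central gateway for delivery.
bob_sec, bob_pub = keypair()
alice_sec, alice_pub = keypair()

# Eve, on the gateway, swaps in her own public keys in transit.
eve_sec_b, eve_pub_b = keypair()   # keypair Eve shows to Bob
eve_sec_a, eve_pub_a = keypair()   # keypair Eve shows to Alice

# Bob thinks he shares a key with Alice; he actually shares one with Eve.
key_bob_side = shared(bob_sec, eve_pub_b)
# Alice likewise.
key_alice_side = shared(alice_sec, eve_pub_a)

# Eve can derive both session keys and re-encrypt messages in flight.
assert key_bob_side == shared(eve_sec_b, bob_pub)
assert key_alice_side == shared(eve_sec_a, alice_pub)

# Unless the clients compare key fingerprints out of band, neither
# Bob nor Alice ever sees anything unusual.
```

The final assertions are exactly the point of the attack: Eve holds a valid session key with each side, so she can decrypt, read, and re-encrypt every message while both endpoints believe they are talking end-to-end.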

Fred Silva March 16, 2016 9:09 AM

The Brazilian government has nothing to do with the arrest of Facebook's vice-president for Latin America. It was a judicial order, from a local judge, from a small city.

OldFish March 16, 2016 10:01 AM

@Rolf

“Congress will stop this irresponsible nonsense. Sooner or later.”

Assuming that we have a SCOTUS with at least a modicum of respect for the BOR, Congress might discover that they lack that power. I do doubt the court’s character so it may be a toss up.

Skeptical March 16, 2016 10:15 AM

@Dirk:

Well, no, and it’s something pretty much the entire crypto community agrees upon. Either everyone is safe, or no one is. Claiming the opposite is a purely ideological stance that has no roots in reality. Over time on this blog, plenty of examples have been given of government backdoored stuff that eventually fell into the wrong hands.

“Either everyone is safe, or no one is.”

Such a slogan taken literally is absurd. Do you mean it to be taken literally?

Or is the reference to the “crypto community” an attempt to limit the slogan to the question of whether an encryption algorithm is mathematically sound and so to avoid extending the slogan to security generally, where it’s ridiculous?

If you mean the slogan literally and as applied to security generally, it reminds me of the hyperbole that surrounds conversations about terrorism. It approaches hysteria. It maps “safety” to a binary system and declares its distribution to be uniform.

Indeed even the “argument by spectacular anecdote”, so often present in conversations about terrorism, makes an appearance here. That a lawful intercept capability in the Greek telephone system was – for a period of time – accessed by unauthorized actors cannot be generalized merely by telling the story. “Look, a particular lawful intercept capability was once used by unauthorized actors. Therefore no lawful intercept capability is perfectly secure against unauthorized use, and therefore no system that includes such a capability can be called secure to any degree at all.”

It’s silly. Can we, just for once, be slightly realistic in speaking about these things?

There is no free lunch, and there will be genuine tradeoffs regardless of the policy choices made. But I think it would improve the clarity of the discussion if we discussed this in terms of actual tradeoffs – and not absolutist principles masquerading as scientific conclusions, or handwaving generalizations that are based on little more than an anecdote and a desire for the generalization to be true.

To me, that reads like “Hello, we are AT&T. We don’t give a rat’s *ss about your privacy and security because we are and always have been government minions”.

Those of us who view our protection against government abuse as institutional, legal, and cultural – not technological – view things differently. I am as astonished by those who think private ownership of firearms to be an effective check on government abuse as I am by those who think ubiquitous warrant-proof devices and services to be effective checks on government abuse.

Perhaps it’s an understandable bias for many to think that technology is a solution to a political problem. God knows such themes are trotted out regularly enough to make everyone feel good about devoting their lives to the profit of a company or to amassing prestige – sorry, to making the world better via [insert product here]. And hey – that’s just human nature. We’re mixed creatures, and sometimes manage to fool even ourselves with our rationalizations.

But let’s be a little more open to calling sacred cows what they really are.

Rolf Weber March 16, 2016 10:17 AM

@OldFish

Monitoring suspects, backed by a court order, has been well established for centuries and doesn't conflict with fundamental rights.

Nick P March 16, 2016 12:41 PM

“Of course you are right criminals could use other messengers. But that’s intended. They simply should not be able to hide in the big mass.”

No, the FBI themselves say they need backdoors to catch and stop criminals. They present it as if backdoors in U.S. products will make the encryption problem go away. They've consistently ignored the counter that criminals will simply use other products, in public debates going back to the 90's. That's why Bruce and others put together a list of worldwide encryption products… again… to counter their claim. They want both visibility into and bypass of encryption in any target, which U.S. backdoors won't give them.

“The man-in-the-middle vulnerability already exists, and the client doesn’t need to be changed (because the WhatsApp client currently doesn’t alert on certificate changes; it’s not even configurable).”

If the vulnerability exists and they just want a crypto MITM, then the argument is, like you said, about using an existing vulnerability to target a specific user. That actually doesn't bother me, as it's as close to warranted, traditional surveillance as you can get, so long as it's only used on that client. Thing is, there's an implication in that: WhatsApp probably wouldn't be allowed to eliminate the vulnerability for the duration of the court order. Insecurity would still be mandated. That's a problem.

I'd be interested in a compromise in such a situation. I'd rather they (or the NSA) use one of the endpoint attacks we know they have. If they're not, we have to question why they're fighting over the WhatsApp encryption. Most likely, they're still working to set precedents to bypass the encryption with the provider's assistance. This is a simple case that would be a stepping stone. That's how I see these battles, with the Feds even admitting it a bit in the Apple case.

@ Skeptical

“And this highlights the larger point: Neither China nor Russia nor any other country is waiting upon the US to determine whether they will impose requirements on communication devices used inside their own countries.”

That’s not necessarily the case. There’s what they’ll do on their own and other things (or limits) they’ll do based on what other countries are doing. It’s negotiation leverage at play. We saw it with Windows source code where they didn’t want to show it to anyone. Then I think it was either China or Russia demanding it for “security evaluation” to show no backdoors. Once delivered, pressure mounted from all kinds of countries effectively saying “that’s not fair! Give it to us or we kick you out!” Companies pushed for it, too, saying they needed their own assurances and depended on quirky features needing inspection. Microsoft created an official source-sharing program as a result that shared it with something like 1,000 companies and a bunch of governments with NDA’s and such.

That’s just one example. There’s a few precedents for countries holding off on pushing for something until major players start doing it. Then they do. This is especially true if many are already pushing for it to various degrees but carefully.

“but we’re also going to cooperate with the government when, in good faith, we believe that we are ethically or legally obliged to do so.”

I like that you included that. I'd reject all cooperation with the FBI, NSA, etc. on ethical and quasi-legal grounds after reading their public statements to the American people and Congress, then the Snowden leaks. I say quasi-legal because secret courts, secret interpretations of law… all this police state shit means I don't know what the law actually is until I'm confronted with it, probably followed by a secrecy order. Being a holdout that's pro-democracy, I'd definitely resist cooperation unless it was a warrant for one or more specific users I could target without affecting the security of the service or other users.

As Rolf pointed out, WhatsApp may be an example like that. Complying should be a top consideration if they are. Closing the vulnerability follows later. Otherwise, they should resist to avoid precedents while also closing the vulnerability immediately. Legal and ethical considerations with both choices. I say “choose wisely” then be ready to live with it. 😉

@ Wael

Appreciate the mention. Yes, I told Bruce and everyone else what they'd do. Science says the best test of a model of the world is that its predictions are repeatedly true. My model predicted the media's posturing and management of the Snowden leaks, the expanded use of NSA resources in civil cases, that they'd lie about both, and that they'd use odd-ball legal arguments to circumvent existing law or protections.

Stephan16498 March 16, 2016 12:49 PM

@Jeroen: this is authentication and permissions that you are talking about, not encryption. WhatsApp Web's servers need to have the keys and manage them, acting as a man in the middle, in order to let you display the messages on your desktop.

Again, this is about the bigger picture:
– $19 billion paid by Facebook to get user accounts and data,
– Poor app rating by EFF,
– Not a single public statement by WhatsApp (which in effect amounts to a “privacy canary” – they cannot afford to either confirm or deny it)

@Response: that screenshot is no public commitment by WhatsApp. What we hear instead is a resounding silence by WhatsApp, the company.

All things being equal, the simplest explanation is usually the best: even if encrypted, there is probably only one key for all users, as has been claimed in other comments (see @Mmm and https://technical.ly/brooklyn/2015/11/20/whatsapp-not-really-encrypting-messages/). This explains everything: the $19b, the EFF rating, and WhatsApp's silence. If WhatsApp can still read your messages, this is not end-to-end encryption. And if the FBI cannot decrypt message flows, no problem: they will get a backdoor directly into WhatsApp's infrastructure.

The bigger the scam, the more readily people will want to believe it. WhatsApp end-to-end encryption is just that.

vwm March 16, 2016 1:04 PM

@Rolf, have you ever actually tried to change the keys of a device during a WhatsApp communication? How do you know that the other device will accept that without flashing a warning / dropping the communication / whatever?
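For what it's worth, the detection a client could do here is just trust-on-first-use key pinning. A minimal sketch of the idea (a hypothetical illustration, not WhatsApp's actual code or API):

```python
import hashlib

class KeyPinStore:
    """Trust-on-first-use (TOFU) pinning: remember the first key seen
    per contact and flag any later change for the user to verify."""

    def __init__(self):
        self.pins = {}  # contact id -> key fingerprint

    @staticmethod
    def fingerprint(public_key: bytes) -> str:
        return hashlib.sha256(public_key).hexdigest()[:16]

    def check(self, contact: str, public_key: bytes) -> bool:
        fp = self.fingerprint(public_key)
        if contact not in self.pins:
            self.pins[contact] = fp   # first contact: trust and pin
            return True
        return self.pins[contact] == fp  # False => key changed, warn the user

store = KeyPinStore()
assert store.check("alice", b"alice-original-key")       # pinned on first use
assert not store.check("alice", b"eve-substituted-key")  # key changed: alert!
```

Whether a given client actually does anything like this, and whether it warns or silently accepts the new key, is exactly the open question.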

Besides, it’s a straw-man argument: The central statement of our sect(1) is something along the lines: »It is dangerous [for everyone] to downgrade security properties [to catch criminals]«.

Even if you can prove that in some particular example some particular security feature might not need to be downgraded, because it was broken all along, that does not invalidate our statement.

(1) that is, everyone disagreeing with you?

PS: Also, the statement does not depend on whether the downgrading happens on the server side or on the client side. We do not even have to consider if you have to push a whole different client app, or just a new pair of keys to the client.

vwm March 16, 2016 1:26 PM

@Stephan16498: For all we know, when you use web.whatsapp.com, traffic gets tunnelled through your phone (try turning the phone off or cutting it from the network; Web will stop working instantly). This is supposedly required for decryption and possibly re-encryption.

I will not argue that WhatsApp can be considered safe; it cannot. I have no doubt FB and WhatsApp are monetising whatever they can; even if the encryption is sound, meta-data is valuable.

But it might not be as simple as you say it is.

CallMeLateFor Supper March 16, 2016 3:45 PM

@z
“[…] completely stunned that the government would pursue backdoors so publicly again”

It was predictable. Individuals who believe that there is middle ground between the strongest encryption and no encryption probably also believe that there is middle ground between pregnant and not pregnant. What do we learn from this?

But yes, of course… Comey & Co. really-really want it, and therefore they deserve to have it. Anything short of that is unfair. Young teens best understand truths such as these and that “no” never means “no, not ever”, it always means “ask again later”. And keep asking until all opponents nod off from boredom aggravated by frustration.

Someone please give Comey a magic pony.

Rolf Weber March 16, 2016 4:06 PM

@Nick


No, the FBI themselves say they need backdoors to catch and stop criminals.

I think the FBI has enough sense of reality to know that they can’t get access to any crypto. They just don’t want criminals to be secure from FBI access merely by installing the WhatsApp app or using an iPhone with a 4-digit PIN. If criminals want to be secure from FBI access, they should be forced not to use a mass product but to do something else. That’s what I mean when I say they should not be able to hide in the big mass.


Thing is, there’s an implication in that: WhatsApp probably wouldn’t be allowed to eliminate the vulnerability for the duration of the court order. Insecurity would still be mandated. That’s a problem.

As I said, there are 2 vulnerabilities: The central server, and the client that doesn’t alert key changes.
The first vulnerability is inherent for a company like WhatsApp. I see no reasonable way how they could get rid of it.
The second vulnerability is more tricky. It would be easy to update the clients so that it alerts on key changes, at least that it’s a config option. But this would conflict with WhatsApp’s “easy-to-use” aim, so my guess is they simply don’t want it.
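The key-change alerting discussed here is essentially trust-on-first-use (TOFU) pinning. A minimal, purely illustrative sketch (hypothetical class and method names, not WhatsApp’s or Signal’s actual code) might look like:

```python
# Hypothetical trust-on-first-use (TOFU) key pinning: the client remembers
# each contact's public key fingerprint and alerts when it changes.
import hashlib

class KeyPinStore:
    def __init__(self):
        self.pins = {}  # contact id -> fingerprint of pinned public key

    @staticmethod
    def fingerprint(public_key: bytes) -> str:
        # A fingerprint is just a hash of the raw public key bytes.
        return hashlib.sha256(public_key).hexdigest()

    def check(self, contact: str, public_key: bytes) -> str:
        fp = self.fingerprint(public_key)
        pinned = self.pins.get(contact)
        if pinned is None:
            self.pins[contact] = fp      # first contact: trust and pin
            return "pinned"
        if pinned == fp:
            return "ok"                  # key unchanged since last time
        return "ALERT: key changed"      # possible MITM or new device

store = KeyPinStore()
print(store.check("alice", b"key-1"))  # pinned
print(store.check("alice", b"key-1"))  # ok
print(store.check("alice", b"key-2"))  # ALERT: key changed
```

The last case is exactly the situation under debate: whether the client warns the user, or silently accepts the new key, is a pure client-side design choice.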


I’d rather them (or NSA) use one of the endpoint attacks we know they have.

Endpoint attacks are dangerous: there is a high risk of being detected, and they sit in a legal grey zone. They are a good option for foreign intelligence, but law enforcement should avoid them whenever possible.

@vwm


Rolf, have you ever actually tried to change the keys of a device during an WhatsApp communication? How do you know that the other device will accept that without flashing a warning / dropping communication / whatever?

I have used WhatsApp for years, and I have never got a warning, and I know that many of my communication partners frequently switch their phones (and I know that hardly any of them backed up and restored their WhatsApp data). I also experienced this:
https://plus.google.com/+RolfWeber/posts/cXMNhZtKLgM
where my friend wasn’t alerted either.


It is dangerous [for everyone] to downgrade security properties [to catch criminals]

So far, both in the Apple and the WhatsApp case, it is not about downgrading security properties. In both cases, it is about exploiting existing vulnerabilities.

EvilKiru March 16, 2016 5:44 PM

@ Rolf Weber:

So far, both in the Apple and the WhatsApp case, it is not about downgrading security properties. In both cases, it is about exploiting existing vulnerabilities.

If it’s not about downgrading security properties, then why does the FBI need Apple or WhatsApp to do anything special for them?

Dirk Praet March 16, 2016 6:19 PM

@ Skeptical

It’s silly. Can we, just for once, be slightly realistic in speaking about these things?

Either a device or particular piece of software is secure for everyone, or it isn’t. NOBUS has been debunked time and again, so yes, can we please for once be slightly realistic about these things?

I am as astonished by those who think private ownership of firearms to be an effective check on government abuse as I am by those who think ubiquitous warrant-proof devices and services to be effective checks on government abuse.

I can only reiterate my opinion that the exact conditions, methods and modus operandi for government subversion of contemporary electronic devices and communications need to be thoroughly discussed in Congress and then voted upon, instead of the executive branch seizing such unprecedented powers under a 200-year-old statute. This is also what Apple and most of the rest of the tech sector are asking.

Such an approach, in view of the potentially huge consequences of what the FBI is asking for, is entirely reasonable, and I can only regret that you don’t see it that way too. If the government gets what it wants and the new piece of legislation passes constitutional scrutiny, then so be it, and many of us will just move to non-US products. At which time the US tech sector can still convince Congress to outlaw such products altogether and, under TT(I)P, try to sue foreign companies providing non-backdoored technology over “unfair competition”.

@ Rolf Weber

my credentials are the technical facts.

You are, as usual, conflating your opinion with facts.

If the criminals want to be secure from FBI access, they should be forced not to use a mass product, but do something else.

So what happens when a mass product gets subverted by the government? Scores of users – both ordinary citizens and criminals – eventually move on to something else that isn’t. And which in its turn becomes a mass product. Long-term benefits for LE: none. All criminals have moved on, and those stupid enough not to were probably also dumb enough to screw up before the government mandated backdoors in the first place. There are only victims here, i.e. the vendor of the original mass product and its remaining, now less secure, users.

So far, both in the Apple and the WhatsApp case, it is not about downgrading security properties. In both cases, it is about exploiting existing vulnerabilities.

For $DEITY’s sake, man. Doesn’t it then even remotely occur to you that the exact purpose of an exploit is to breach, and thus downgrade security? And what are you babbling about a 2nd vulnerability on the client side? The absence of a key exchange checker UI is not a vulnerability, but a missing security control to mitigate a possible vulnerability on the server side. It’s only a vulnerability if it can be exploited somehow on the client side alone.

Once again, you are either showing off complete ignorance of basic security definitions and concepts, or you are deliberately distorting them to fit your Einzelgänger narrative. Neither of which reflect well on your credibility.

Sancho_P March 16, 2016 6:54 PM

@Skeptical (15, 10:41 AM), @Nick_P,
re ‘the USA is the world’s leader in liberty’:

”And this highlights the larger point: Neither China nor Russia nor any other country is waiting upon the US to determine whether they will impose requirements on communication devices used inside their own countries.” [Skeptical]

Yes, exactly. But they will follow the US – because they have to.
Compel US companies and empower oppressive regimes, including the USA, your enemy.

Dan2 March 16, 2016 7:36 PM

@ vwm

“For all we know, when you use web.whatsapp.com, traffic gets tunnelled through your phone ”

“WhatsApp communication between your phone and our server is encrypted.”
https://www.whatsapp.com/faq/en/general/21864047

It doesn’t define end-to-end as phone to phone, and there’s no way the desktop app is tunneled through your phone. It may need some kind of prefetched key from the phone, in compliance with TextSecure, but phones aren’t generally very responsive to non-carrier requests. This whole thing is very confusing to me.

Is there a URL I missed?

Dirk Praet March 16, 2016 8:48 PM

@ Thoth

XMPP and OTR are actually rather complex stuff. I wouldn’t touch them with a 10-foot pole either. I would prefer something much simpler, like a binary-based protocol built on binary tags, lengths and fixed positions.

No argument here. I just pointed them out because they are readily available on quite a few platforms, their configuration within the grasp of the average layman, while still being less insecure than other, more popular platforms like Skype and the like. I seem to remember that somewhere in the Snowden docs the NSA didn’t seem to like OTR a lot at the time, so that’s a plus.

And I conflated @Anura with @Markus Otella. I found Markus’s PoC pretty impressive.

@ Nick P.

I’d be interested in a compromise on such a situation. I’d rather them (or NSA) use one of the endpoint attacks we know they have.

The reason why they don’t probably has everything to do with the fact that in a court of law the defense might ask exactly how they have been able to unlock/decrypt a device. The government may then be compelled to reveal the exploit/endpoint attack used which would then end up being patched. After a few cases, they run out of exploits and are basically screwed. Forcing the issue in a legal way kinda makes sense in such a context, and no party would have to give up any exploits they’re sitting on.

Nick P March 16, 2016 9:02 PM

@ Rolf Weber

“I think the FBI has enough sense of reality to know that they can’t get access to any crypto.”

Their own data said they rarely encounter crypto and usually don’t need it for conviction. Their public comments are polar opposites. Their trustworthiness is already low per source evaluation criteria they and CIA use. Let’s keep looking at it, though.

” If the criminals want to be secure from FBI access, they should be forced not to use a mass product, but do something else.”

Wow, I can’t believe you said that. Criminals are using crypto to send voice, text, and video. The same thing innocent people do and whose compromise leads to real harms of all kinds. Given there’s no evil bit, we can’t distinguish ahead of time which messages are malicious. So, to block criminals from mass products, you’d have to block everyone from mass products with encryption. That’s why the opposite is better given so few are criminals: a numbers game where it protects us many times more than it hurts us.

“The first vulnerability is inherent for a company like WhatsApp. I see no reasonable way how they could get rid of it.”

The way it’s been done since INFOSEC was invented and standardized: selective sharing of source with third parties that vet it in an evaluated configuration and post hashes of what they found trustworthy. They do this for updates as well. You download the update, hash it, visually check it against others, and apply it if it passes.
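That verification step is a one-liner in practice. A minimal illustration (hypothetical helper names; SHA-256 assumed as the digest, as is typical for published release hashes):

```python
# Sketch of hash-based update verification: hash the downloaded update
# and compare against hashes independently published by third-party vetters.
import hashlib

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def verify_update(update: bytes, published_hashes: list[str]) -> bool:
    # Apply the update only if its hash matches one the reviewers
    # published for a vetted build.
    return sha256_hex(update) in published_hashes

# Toy data standing in for a real binary and its published hash list.
update = b"...binary of the vetted update..."
trusted = [sha256_hex(b"...binary of the vetted update...")]
print(verify_update(update, trusted))           # True
print(verify_update(b"tampered build", trusted))  # False
```

The security here rests entirely on the published hashes coming from parties the user trusts, over a channel the vendor cannot silently rewrite.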

I doubt they will do this but I’m just saying they could. Probably cheaply if they use individual developers (esp ideological) with established rep instead of certification lab. They don’t care, though. Too bad given the acquisition amount means they have funding to make a high-security, usable app.

“Endpoint attacks are dangerous, there is a high risk of being detected. And in a legal grey zone. It’s a good option for foreign inteligence, but law enforcement should avoid it whenever possible.”

I see the argument. Point being, there’s a difference between “we can’t do this thanks to X” and “we can do this but don’t due to some risks, and prefer to increase our legal power over time outside of Congress through such deceptions.” World of difference. The first is what they’re telling the public while they’re apparently doing the latter. Another reason to reject any power increase or unusual access they want without thorough review of the what and the why.

@ Dirk Praet

“The government may then be compelled to reveal the exploit/endpoint attack used which would then end up being patched. After a few cases, they run out of exploits and are basically screwed. ”

That makes sense. That also sounds like a funny way to reduce the number of people they try to lock up in the event criminals adopt widespread crypto. In any case, I think compelling a specific user to produce a password for specific devices or services seems like the best route in the long term. It makes them target suspects instead of carriers. Suspects also know the search is happening. They still have opportunities to compartmentalize. We can all still use the strongest security available.

I recommended in my petition to the White House that they push for forced disclosure of individual keys via warrants over backdoors as a compromise. People worried about that can still (a) meet face to face like Mafia does, (b) communicate with trusted couriers like Bin Laden did, (c) use custom software on more open hardware, or (d) layer crypto stuff on top of whatever crowd uses to blend in as Clive pointed out.

Clive Robinson March 17, 2016 12:37 AM

@ Nick P,

With regards your,

Wow, I can’t believe you said that.

comment on Rolf Weber’s,

    If the criminals want to be secure from FBI access, they should be forced not to use a mass product, but do something else.

I could not believe it either…

Because it’s based on a false assumption Rolf has made that criminals can or will move to something else rather than move the secure end points of the communications channel…

@ Rolf Weber,

I’ll say it again and hopefully this time you will understand it, if not I’m sure most others will.

If the FBI get to make the iPhone or any other smart phone insecure, that will not force criminals to stop using such a “mass product”.

Because the criminals or anybody else for that matter can move the security end point past the smart phone, and there is nothing the FBI can do to stop them doing so.

To see why, consider the case of the one-time pad. The criminal writes the plaintext into the pad, does the additive encryption, and then types the resulting ciphertext into the smartphone. All the FBI, or the NSA for that matter, get is a ciphertext they cannot use.

And before you start in on the KeyMat issue of an OTP there are other Ciphers and Codes a criminal could use with a similar level of security.
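The additive scheme Clive describes fits in a few lines. A toy illustration over the 26-letter alphabet (a real pad must be truly random, as long as the message, and never reused):

```python
# One-time pad as an additive cipher over A-Z: encryption adds the pad
# letter-by-letter mod 26, decryption subtracts it.
def otp(text: str, pad: str, decrypt: bool = False) -> str:
    sign = -1 if decrypt else 1
    out = []
    for t, p in zip(text.upper(), pad.upper()):
        out.append(chr((ord(t) - 65 + sign * (ord(p) - 65)) % 26 + 65))
    return "".join(out)

pad = "XMCKLQZWV"            # single-use random key material
ct = otp("MEETATTEN", pad)   # only this ciphertext is typed into the app
assert otp(ct, pad, decrypt=True) == "MEETATTEN"
print(ct)
```

The phone, and anyone intercepting it, sees only `ct`; without the pad, every nine-letter plaintext is equally likely, which is the point of Clive’s argument about moving the security endpoint off the device.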

I suspect that one of the reasons both the CIA and NSA are “briefing against” Comey and the FBI is that they can see slightly further into the future than Comey can. They quite rightly regard him as being a “dangerous idiot” who is hell bent on making their lives more difficult and the FBI’s job impossible.

The reason is that it won’t only be criminals moving the secure endpoints; politicians and businesses will as well. Thus, as far as “message content” goes, it will “go dark” for all of them. But there is still the issue of traffic analysis. Although traffic analysis will provide actionable “intelligence” for the NSA and CIA, it won’t provide legally acceptable “evidence” in most cases for the FBI (it’s the same issue as with Stingrays). Both criminal lawyers and the smarter criminals are aware of this these days and will push the envelope on discovery way beyond the point that “parallel construction” will cover things up.

Thus Jim Comey is seen as a “dangerous idiot” because he is in effect “killing the golden goose” by making the “going dark” issue considerably worse not stopping it.

Rolf Weber March 17, 2016 4:50 AM

@EvilKiru


If it’s not about downgrading security properties, then why does the FBI need Apple or WhatsApp to do anything special for them?

They don’t need to do anything “special” for them. They just need to write exploits for known vulnerabilities. The FBI could even do it on its own, if the companies handed over source code and keys and granted access to servers. But it’s likely better for both if it’s done by the companies.

@Dirk Praet


Either a device or particular piece of software is secure for everyone, or it isn’t. NOBUS has been debunked time and again

NOBUS so far has only been discussed under the premise that the “us” is a third party, like the NSA or the FBI. But that’s not the case for “backdoors” like the ones I propose, where the “us” is not a third party but the manufacturer (like Apple) or the service provider (like WhatsApp). And regarding this, the observations so far have clearly shown 2 points:

  1. The manufacturers and service providers are already able to decrypt, because most of their “security” is based on obscurity.
  2. It is possible to implement secure “backdoors”. For example, smartphone manufacturers could implement it in a way that only they themselves, only on their premises, and only while in physical possession of the phone, can unlock it. This wouldn’t put anyone at risk (and, BTW, would have the nice side effect that a phone is unlockable if the user forgets his passphrase or dies).


So what happens when a mass product gets subverted by the government? Scores of users – both ordinary citizens and criminals – eventually move on to something else that isn’t.

I don’t speak about a government subversion. I speak about the situation that the government can obtain a warrant and go with it to manufacturers or service providers to demand specific data. Why should I be scared of this? I’m used to the fact that my government can wiretap me if they convince a judge that it’s necessary.


Doesn’t it then even remotely occur to you that the exact purpose of an exploit is to breach, and thus downgrade security?

You still confuse vulnerabilities with exploits. They really should add this to the MCSE training. (SCNR)

@Nick


Their own data said they rarely encounter crypto and usually don’t need it for conviction.

But sometimes they do, like in the San Bernardino case. And then they should be able to decrypt, at least when the target uses the default software and default encryption.


So, to block criminals from mass products, you’d have to block everyone from mass products with encryption.

Maybe here is some misunderstanding. I don’t want to block criminals from using a “backdoored” WhatsApp. Quite the opposite, I welcome it, because then it is pretty easy to wiretap them (and this will always happen, because criminals are not perfect; most of them make mistakes). I just don’t want criminals to be “FBI-proof” simply by using mass products like WhatsApp or an iPhone with a 4-digit PIN.


That’s why the opposite is better given so few are criminals: a numbers game where it protects us many times more than hurts us.

What I propose wouldn’t hurt anybody. All users would be safe. Only targeted surveillance would be possible, only when the government compels the service provider to do it. We already live with this situation for decades.


The way it’s been done since INFOSEC was invented and standardized: selective sharing of source with third parties that vet it in an evaluated configuration and post hashes of what they found trustworthy.

I agree it is possible, but companies would have to rely on others, and/or share their precious source with them. I hardly believe WhatsApp would do it, let alone Apple.


Another reason to reject any power increase or unusual access they want without thorough review of the what and the why.

Again, I don’t demand increased or unusual access. My argument is that warranted access, as has been well established for centuries, is perfectly implementable even with strong encryption.

@Clive Robinson


Because it’s based on a false assumption Rolf has made that criminals can or will move to something else rather than move the secure end points of the communications channel…

Believe me, I understood what you said, but it makes no difference to my argument. It doesn’t matter whether criminals use another “unbreakable” messenger or add their own encryption layer on top of WhatsApp. I know that unbreakable crypto is out of the bottle, so a sophisticated criminal is of course able to encrypt his communications so that even the FBI with a warrant, or the NSA, can’t decrypt them. This will always be the case, and I don’t mind it. It’s a matter of fact. But what I don’t want is for “unbreakable crypto” to become the default for everybody.


To see why consider the case of the One Time Pad.

I know this argument too. You say that it will be a competition, and that each time the FBI exploits a vulnerability (or forces companies to exploit it for them), the companies in turn will close this vulnerability, and in the end the system will be perfectly unbreakable and nobody will have access any more. I see the point, but I largely disagree.

First, as a general point, it is good if vulnerabilities are closed. That’s how security research works, so I don’t have any problem with this.

However, my view is that some “vulnerabilities” are very hard to close, and closing them comes at a price: convenience and user experience. This is the case if you see it as a vulnerability if even the manufacturer or service provider is absolutely unable to decrypt. At least I don’t see a way to do this without heavily decreased convenience and user experience. And that’s why I strongly doubt that companies will go this way.

PGP is one good example. It is, at least I would rate it so, unbreakable. I have no idea how the FBI could force anybody to assist in decrypting PGP messages. But PGP is a pain in the ass, so nobody uses it.
Another example is your one-time pad. It’s not suited for practical use.

Rolf Weber March 17, 2016 5:09 AM

“This is the case if you see it as a vulnerability if even the manufacturer or service provider is absolutely unable to decrypt.”

Sorry, of course there is a “not” missing:

This is the case if you see it as a vulnerability if even the manufacturer or service provider is not absolutely unable to decrypt.

Z.Lozinski March 17, 2016 6:39 AM

How many people here remember the CALEA wars of the 1990s? (Well, 1990-2006)

I think this request is the first phase of what we might call “CALEA-2”.

About the same time as First Crypto Wars, there was another initiative, driven by the FBI. CALEA (the Communications Assistance for Law Enforcement Act, 47 USC 1001-1010) came about because telecom exchanges were shifting from analog to digital technology, and there was a concern that “traditional’ wiretapping would no longer work.

Some background, which is relevant to the reported request for a WhatsApp backdoor. In every major country, one of the conditions of holding a telecommunications license is to provide assistance to law enforcement. The generic term used is lawful interception (LI). The idea is that LI is only available, after presentation of a warrant, through a secure interface, and is subject to audit. In the UK, the Parliamentary Commissioner reports the number of warrants issued each year to the House of Commons. All fixed-line and mobile telecom network equipment implements LI. And yes, like any backdoor it can be compromised, c.f. “The Athens Affair”.

In the 1990s, the FBI asked the US telecom operators to implement LI facilities for the new digital switches, and the CALEA law was passed. The telecom industry evaluated the cost (measured in USD billions) and politely declined to implement it. There followed a lobbying war in Washington. At the end of it, a fund was set up in the late 1990s to pay for the upgrades to the digital switches. From memory, I think AT&T (then a switch maker, before the Lucent trivestiture) and Nortel were paid about USD 450M each to cover the cost of designing and implementing the required upgrades across the USA.

Currently, there are no telecommunication licenses required for over-the-top services. (OK, there is the odd exception in developing countries.)

Remember, one of the reasons Facebook paid USD 18B for WhatsApp, is that mobile internet based messaging is predicted to overtake SMS.

The law enforcement community wants access to these new forms of communications. The new service providers don’t want to carry the cost of implementation. Some new service providers and vendors have strong beliefs about the right to privacy. The new service providers don’t hold licenses by which they can be compelled.

What has changed since CALEA 1 is a) just how much of people’s lives is now conducted through mobile devices, and what expectations of privacy they have, and b) just how much of people’s lives is now conducted through mobile devices, and what expectations of security and integrity they have.

Ultimately, we need an informed public debate on this, in all the affected countries.

Ken2 March 17, 2016 8:30 AM

@ Z.Lozinski

“Remember, one of the reasons Facebook paid USD 18B for WhatsApp, is that mobile internet based messaging is predicted to overtake SMS.”

Great post, Sir, and this is one of the areas where the U.S. is falling behind. We’re still working out standards, so that we can push them to the rest of the world, but the rest just went ahead and did it. This is a different scenario than the electronic switches of years past.

And we’re still stuck on the end-to-end debate, or the lack thereof.

Dirk Praet March 17, 2016 10:38 AM

@ Rolf Weber

NOBUS so far has only been discussed under the premise that the “us” is a third party, like the NSA or the FBI. But that’s not the case for “backdoors” like I propose, where the “us” is not a third party, but the manufacturer (like Apple) or the service provider (like WhatsApp).

Interesting but pointless artificial distinction. NOBUS has been and still is widespread among countless vendors and manufacturers too. The best-known examples are hardcoded service passwords, found on a regular basis by pen testers and security researchers reverse-engineering stuff. Whether a backdoor has been introduced by a TLA or by the vendor itself for all practical purposes doesn’t make any difference whatsoever to an adversary. Or to put it bluntly: “A hole is a hole”.

The manufacturers and service providers are already able to decrypt, because most of their “security” is based on obsurity.

It keeps getting better. While it is true that any “trusted” gateway nowadays should be considered a potential security hazard by design, such a design has absolutely nothing to do with obscurity. And when companies incorporate encryption in their products or services, the general idea is to protect them from prying eyes both inside and outside the company, not to become peeping Toms either themselves or on behalf of someone else. What you’re implying with this ridiculous statement is the exact opposite.

It is possible to implement secure “backdoors”

No, it isn’t, Rolf. Ask any real security professional or cryptographer. You are simply reiterating the NOBUS fallacy. And even mitigated under the conditions you describe, it would place an unreasonable burden on companies like Apple, not to mention putting the privacy and even the lives of countless people in authoritarian countries at stake.

Can you please explain to me how you will handle government requests from, say Saudi Arabia, Turkey, China, Syria and the like when they demand access to a known backdoored product or service? As long as there are no backdoors whatsoever, at least a company can put up a credible legal defense, but which goes entirely out the door once they are there.

Any company operating under their jurisdiction could not but comply or discontinue their operation. What are you going to tell either those companies or the targets of such warrants? That it’s just tough luck that they need to either comply with “lawful” government demands and put people’s lives at stake, or forfeit their revenues? And to activists that they just chose the wrong country to oppose the government?

But, yes, such people can use other products and services too. Goodbye Facebook, Google, Apple, Microsoft, Twitter, WhatsApp etc. As per your own reasoning, they’ll need to revert to alternatives that will make them stand out more, in the process making other people already using these services suspect too. As a law abiding German citizen with nothing to hide, none of this of course is of any concern to you. But not everyone is you and not every country is Germany.

Bottom line: mandated backdoors are a stupid idea, and from more than one angle. The price for catching the odd terrorist or pedophile is making the entire world less secure, even to the point that it will be putting the lives and livelihoods of countless perfectly innocent people at stake. You may find that acceptable, I don’t. And neither do the United Nations.

You still confuse vulnerabilities with expoits.

Whatever, Rolf. It’s perfectly clear to all of us that you just don’t understand the terminology, which you show yet again in saying “This is the case if you see it as a vulnerability if even the manufacturer or service provider is absolutely unable to decrypt”. That’s not a vulnerability; that’s a deliberate security control, and the exact opposite of one.

However, my view is that some “vulnerabilities” are very hard to close, and to close them comes with a price: convenience and user experience.

Again a very personal view with no roots in real life. The prime reason some vulnerabilities are hard to close is for compatibility purposes, not for convenience or user experience.

At least I don’t see a way how to do this without a heavily decreased convenience and user experience. And that’s why I strongly doubt that companies will go this way.

Poppycock. Try Signal and Protonmail out some time. Oh yes, and good luck mandating backdoors in end-to-end encryption security architectures specifically designed to avoid MITM attacks and on top of that located in a jurisdiction outside of the US. Companies and individuals alike will eventually move on.

But PGP is a pain-in-the-ass, so nobody uses it. Another example is your One Time Pad. It’s not suited for practical use.

Both still are excellent tools if you learn how to use them properly. Motivation for which mostly depends on your intellectual capabilities and how secure you want your communications to be. They’re still very popular with old-school cypherpunks.

What I propose wouldn’t hurt anybody.

Tell that to a Congolese opposition leader or a Saudi group fighting for the right of women to drive a car.

Only targeted surveillance would be possible, only when the government compels the service provider to do it.

Until such a time as the FISA court issues a single warrant for all devices that have been shown to make phone calls to foreign entities.

Rolf Weber March 17, 2016 3:43 PM

@Dirk Praet


The best known example are hardcoded service passwords

Yawn. This is not how the proposed “backdoor” would look like. The “backdoor” would be secured with strong crypto. If you deny that this is possible with crypto, you deny that crypto can secure you in the first place.


While it is true that any “trusted” gateway nowadays should be considered a potential security hazard by design, such a design has absolutely nothing to do with obscurity.

The obscurity is that WhatsApp says they cannot read the messages, while the truth is that they easily can, if they want to. Their “security” is entirely built on their promise that they won’t perform a MITM.

And Apple’s “security” (like the erase after 10 failed attempts, or the delayed retries) relies entirely on their closed-source iOS software and their locked bootloader.


Ask any real security professional or cryptographer.

I assume you mean the same “experts” that still believe in and repeat the Snowden fairy tales?


Can you please explain to me how you will handle government requests from, say Saudi Arabia, Turkey, China, Syria and the like when they demand access to a known backdoored product or service?

First, I have absolutely no trust that Apple doesn’t already assist in these countries. I don’t think these regimes could compel companies like Apple or WhatsApp, but they could push by threatening sale prohibitions or blocking of services. In any case, the authoritarian regimes will certainly not wait on a “precedent” in the U.S.

Second, since they cannot compel, they would have to ask for legal help from the U.S. government. It wouldn’t be up to Apple or WhatsApp to decide. And I’m confident the U.S. government wouldn’t grant assistance against dissidents.


Oh yes, and good luck mandating backdoors in end-to-end encryption security architectures specifically designed to avoid MITM attacks and on top of that located in a jurisdiction outside of the US.

If such a service will ever attract a significant number of customers, it will for sure attract the government of this foreign jurisdiction as well, and few countries on earth provide as much legal protection and civil liberties as the U.S. You are very naive if you believe this would be a win for privacy.


Try Signal and Protonmail out some time.

Signal is basically the same as WhatsApp, save that it does the key change checks. But exactly this is one reason why it will never get hundreds of millions of customers, because 99% of the users don’t appreciate being annoyed with keys.
And Protonmail? Password forgotten, emails lost. Forget it. The nerds may love it, but nobody else.
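For readers unfamiliar with what a “key change check” actually is, here is a minimal sketch: the client pins the first public-key fingerprint it sees for a contact (trust on first use) and warns when a different key is later presented, which is exactly what a service-provider MITM would trigger. This is an illustration of the concept only, not Signal’s actual safety-number implementation; all names here are made up.

```python
import hashlib

# Hypothetical trust-on-first-use (TOFU) store: contact -> pinned fingerprint.
pinned = {}

def fingerprint(public_key_bytes: bytes) -> str:
    """Short SHA-256 fingerprint of a peer's public key."""
    return hashlib.sha256(public_key_bytes).hexdigest()[:16]

def check_peer_key(contact: str, public_key_bytes: bytes) -> str:
    """Return 'new', 'ok', or 'CHANGED' for the presented key."""
    fp = fingerprint(public_key_bytes)
    if contact not in pinned:
        pinned[contact] = fp          # first contact: pin the key
        return "new"
    if pinned[contact] == fp:
        return "ok"
    return "CHANGED"                  # possible MITM: alert the user

print(check_peer_key("alice", b"key-v1"))  # new
print(check_peer_key("alice", b"key-v1"))  # ok
print(check_peer_key("alice", b"key-v2"))  # CHANGED
```

A client that silently accepts the “CHANGED” case, as the WhatsApp client of the time reportedly did, is precisely what makes a provider-side MITM invisible to the user.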


They’re still very popular with old-school cypherpunks.

Sure.

Thoth March 17, 2016 6:16 PM

@Rolf Weber, Clive Robinson, Dirk Praet, Nick P
re: Backdoors can be securely implemented.
What are your schemes and proofs of concept for secure backdoors? They need to include features to stave off requests from China, Russia and other governments except those you want to provide NOBUS access to. Do you have code or hardware samples for us to see, to believe that a secure backdoor with NOBUS access can be provided?

re: PGP pain in the bottoms
Mailpile and many other PGP plugins from Yahoo and Google for web browsers are becoming more common. Mailvelope, a browser plugin for Firefox and Chrome, can be used by those who want an easier and less harsh experience with PGP.

re:Hardcoded passwords
Since these passwords can allow unintended access and were not planned as proper access, they are still considered backdoors, albeit unintended or accidental ones.

re:Yawn
Please be more respectful to Dirk Praet.

Links:
https://www.mailpile.is/
https://www.mailvelope.com/

Sancho_P March 17, 2016 6:23 PM

@Dirk Praet (re Rolf Weber’s ”secure backdoors”)

What’s called a “backdoor” (one key for all, secret access) isn’t secure per se, so we don’t have to discuss that.
The same goes for “would be secured with strong crypto”: it’s useless to put ‘secure strong crypto’ on top of a pile of insecurity and bugs.

But the point of “lawful access” across different countries (cultures) and legislatures, combined with one type of device worldwide, is worth a second thought.
From several products we know of differences (up to outright prohibition) depending on local legislation.
Having one (mobile!) device for all of us would be a blessing, one that may have slipped through only until now.

Often the US has been the trailblazer, the leader of progress.
In this case they may be trailblazers too, but backwards.

Clive Robinson March 17, 2016 7:54 PM

@ Thoth,

What are your schemes and proofs of concept for secure backdoors?

@ Nick P and I have discussed this in part in the past, and my take is that backdoors are unworkable.

There are many ways you might attempt to make backdoors, and I’ve thought about quite a few. However, the one thing they have in common is that they all have faults one way or another, or their use is operationally precluded in common circumstances.

I won’t go through all the reasoning but give you the highlights from which you can easily see the rest.

The first question you have to ask is a biggie, which is not how but “Where, what and when?” you are going to leak information, then “How are you going to ‘apply’ / use it?”, before finally asking “By what method are you going to accomplish the leak?”.

Operationally, you want as much as possible to avoid “tipping off”; therefore you preferably do not want to leak from a device under the control of the first or second party to the communications, because you cannot stop them from instrumenting the communications interface in some manner that would reveal the leakage, the enabling / control system, or both.

Thus you would prefer, if possible, the “Where” of the leak to be some point upstream of the communications endpoints, two or three hops removed from the first and second parties. This gives the basic options of somehow attacking a mid-point server or mounting some kind of man-in-the-middle attack.

A reasonably designed endpoint would preclude all current mid-point attacks by use of some kind of shared secret. The problem for the first and second parties is establishing the secret in the first place and keeping it from any and all third parties.

There are established ways to do this if the first and second parties are actually known to each other or can establish a trusted side channel. For most criminals / terrorists / etc., establishing a shared secret is not really much of an issue.

Where it is an issue is the “instant gratification of eCommerce”, which gives a big clue as to who such mid-point attacks would really be focused against, contrary to the statements of those who want to force such information leakage.

Thus, having accepted that a properly designed and used system is in effect immune to mid-point attacks, you are forced to go for the endpoints for your backdoor.

And this is where it all goes horribly wrong, and why it is guaranteed that such backdoor methods will not only become known, but be instrumented and investigated, with a very high probability of being either reverse engineered or easily rendered unusable.

As I’ve also said in the past, the only way to avoid “tipping off” is to ensure all endpoints leak information at all times. That is the method chosen by Micro$haft with Windoze 10, and it means the potential for harm is so large as to be virtually unquantifiable in any meaningful measure.

However, any semi-intelligent criminal is going to be well aware of this continuous information leakage, and will thus move the communications endpoints beyond the reach of such a backdoor. Whether it is a pencil-and-paper One Time Pad or an energy-gapped device, it puts the message content beyond the reach of the NSA, CIA, FBI, DOJ etc.

Whilst the NSA and CIA still have the fallback of “Traffic Analysis” because they are looking for “actionable intelligence”, the same is not true for the FBI or DOJ, who are looking for “admissible evidence”, which the result of traffic analysis rarely is, because it does not meet the required “burden of proof” for criminal prosecutions.

I hope that covers what you are looking for.

Dirk Praet March 17, 2016 8:11 PM

@ Nick P.

I recommended in my petition to the White House that they push for forced disclosure of individual keys via warrants over backdoors as a compromise.

RIPA in the UK already has that. But there are a number of drawbacks, for example when, as in the SB case, the owner is dead. And what to do when the owner refuses to give up the keys, even under duress? What if he has genuinely forgotten his password? Indefinite incarceration until he complies? The exact same sentence as if he had been proven guilty of the crimes he is accused of, but without proof? I’m not sure something like that could ever stick under a traditional interpretation of the rule of law.

The way I see it, nothing prevents a company from knowingly and willingly backdooring a protocol, algorithm, application, device or service if it so chooses. But this should come publicly advertised with an explicit “NSA inside” label, forfeiting the right to call the product “secure” (or whatever term is agreed upon) in any way, and with stiff penalties and punitive damages for all affected users when lying about it, whether the product is commercial or open source. Same thing if a company voluntarily chooses to cooperate with IC/LE (AT&T et al).

On the other hand, any company should also be free to offer unbreakable crypto without government mandated backdoors, and for all the reasons previously discussed.

I can however see exceptional circumstances in which the government could ask a company to provide assistance in unlocking or deciphering a device, conditions and methods of which need to be framed in a new piece of legislation that reflects the world of today.

1) An adversarial procedure with three parties: the government, the company and a privacy & civil liberties representative. No secret ex parte decisions with a gag order from secret courts under secret interpretations of the law.

2) Single warrants for single cases.

3) A high burden of proof on the government to show a clear and imminent danger with a formidable impact on national security and the public at large, into which all parties are read under NDA. By which I mean a “24” scenario, not some local magistrate in Kalamazoo, MI demanding to unlock an iPhone in a divorce case.

4) No unreasonable burden on the target company, as in fully and permanently re-engineering the product, and without impacting the security of other users of the product. How far a company wants to go here will of course also depend on the severity of the case presented by the government.

All of which means that precious few cases will proceed, and that the government as in the past will have to keep on focusing on other methods and only in exceptional circumstances can resort to this type of solution. To me, that’s the only acceptable balance.

Dirk Praet March 17, 2016 9:55 PM

@ Rolf Weber

The “backdoor” would be secured with strong crypto. If you deny that this is possible with crypto, you deny that crypto can secure you in the first place.

There’s no such thing as a bulletproof backdoor, Rolf. With or without crypto. Really. I suppose you totally missed the Juniper story in which an NSA Dual_EC_DRBG backdoor was apparently successfully subverted by a third party?

The obscurity is that WhatsApp says they cannot read the messages, when in fact they easily could

Would you please be so kind as to use the correct terminology as not to confuse people? In security engineering, security through obscurity is the use of secrecy of the design or implementation to provide security, not spreading lies about it.

The word you’re looking for is deception, but that is not the case here because, under normal WhatsApp MO, they can’t. They only could if they were to develop a MITM exploit, which you have no way of proving already exists.

It’s a fine example of twisted logic that is the equivalent of calling an illiterate person a liar for claiming he can’t read under the argumentation that he could easily learn to.

Second, since they cannot compel, they would have to ask for legal help from the U.S. government. It wouldn’t be up to Apple or WhatsApp to decide.

That’s not the way it works, Rolf. A company operating in a particular country falls under that country’s jurisdiction unless there is an explicit statute saying it doesn’t. In 2013, BlackBerry bent over to the Indian government after a four-year standoff. At the end of last year, they decided to leave Pakistan when that government asked for the same. Google at some point pulled out of China.

I assume you mean the same “experts” that still believe in and repeat the Snowden fairy tales?

You’re starting to obsess over Snowden again. We’re talking about something completely different.

If such a service will ever attract a significant number of customers, it will for sure attract the government of this foreign jurisdiction as well

Without there being a certainty that that country will legislate mandatory backdoors too. And if at some point it does, then everybody moves on again.

But exactly this is one reason why it will never get hundreds of millions of customers, because 99% of the users don’t appreciate being annoyed with keys.

So what? Their target audience is folks who are sufficiently concerned with the privacy and security of their correspondence that they will gladly put up with a minor nuisance that’s actually there to protect them. At the local pub over here, pretty much everyone has ditched Skype, WhatsApp, Telegram, iMessage and the like in favour of Signal, and I have yet to hear the first complaint about it being user-unfriendly.

Same thing for Proton. And by using a password manager, you’re making sure not to lose any passwords. It’s really not very different from driving a car safely. You can either choose to do so by observing a couple of annoying rules, like not driving under the influence, or suffer the consequences if you don’t. And eventually fall in line anyway after a couple of bad experiences.

Rolf Weber March 18, 2016 3:24 AM

@Thoth


What are your schemes and proofs of concept for secure backdoors? They need to include features to stave off requests from China, Russia and other governments except those you want to provide NOBUS access to. Do you have code or hardware samples for us to see, to believe that a secure backdoor with NOBUS access can be provided?

This is pretty easy and straightforward. Please keep in mind that Clive and Dirk are discussing 3rd party backdoors, with which law enforcement or intelligence agencies could get “direct” access. I don’t say that these kinds of backdoors are impossible to implement, but they are much harder than what I propose. Furthermore, I don’t want such backdoors, so there is little need for me to discuss them.

I want “backdoors” in the sense of demanding that manufacturers or service providers be able to decrypt their own encryption. And that would be quite easy to implement:

Regarding smartphone encryption, the basic idea is that the manufacturer encrypts the user’s PIN (or the filesystem key) with its public key and stores it on the device. The private key can be a hardware key, impossible to extract, so that it cannot be stolen undetected and the unlocking can only be performed on the premises of the manufacturer.

Regarding messengers, the clients should be patched so that they never alert when the key of the service provider is presented, making a MITM always possible, but only with the service provider’s key. (Note that regarding the current WhatsApp case, even this wouldn’t be necessary, because the current WhatsApp client never alerts on key changes)

I hope the companies will become reasonable again and take these steps voluntarily; otherwise I hope Congress or the courts will force them to.
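The escrow idea above can be made concrete with a deliberately toy sketch: textbook RSA with tiny primes, utterly insecure and for illustration only. A real system would use something like RSA-OAEP with the private key locked in an HSM, as proposed; the function and variable names here are hypothetical.

```python
# Toy illustration of the escrow idea: the device encrypts the user's PIN
# under the manufacturer's public key; only the manufacturer's private key
# (held in an HSM on their premises, in the proposal) can recover it.
# Textbook RSA with tiny primes: NOT secure, illustration only.

p, q = 61, 53
n = p * q                      # public modulus
e = 17                         # public exponent (manufacturer's public key)
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)            # private exponent (stays with the manufacturer)

def escrow_pin(pin: int) -> int:
    """Run on the device: encrypt the PIN under the public key."""
    return pow(pin, e, n)

def recover_pin(blob: int) -> int:
    """Run only inside the manufacturer's HSM: decrypt the escrowed PIN."""
    return pow(blob, d, n)

blob = escrow_pin(1234)        # stored alongside the encrypted filesystem
assert recover_pin(blob) == 1234
```

The obvious objection, raised elsewhere in this thread, is that the single private exponent `d` becomes a catastrophic single point of failure: whoever extracts or compels it can unlock every escrowed device.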

Thoth March 18, 2016 4:54 AM

@Rolf Weber
re: Un-extractable private keys

“Regarding smartphone encryption, the basic idea is that the manufacturer encrypts the user’s PIN (or the filesystem key) with its public key and stores it on the device. The private key can be a hardware key, impossible to extract, so that it cannot be stolen undetected and the unlocking can only be performed on the premises of the manufacturer.”

The history of HSMs and smartcards has shown us that it takes only moderate effort to extract them. Independent labs like IOActive have a history of reverse engineering chips on modest budgets (~USD $10,000). If you are going to put the same hardware private key on every chip, all it takes is a few of these smartphone chips (~USD $0.50 per piece?) and then mass de-capping them. Because there are multiple samples carrying the same key, each de-capped chip’s flaws can be covered by the others. This won’t cut it.

re: Sneaking patches for Messengers

“Regarding messengers, the clients should be patched so that they never alert when the key of the service provider is presented, making a MITM always possible, but only with the service provider’s key. (Note that regarding the current WhatsApp case, even this wouldn’t be necessary, because the current WhatsApp client never alerts on key changes)”

What if the application uses open source + open standards, like XMPP messengers with OTR? You don’t need the same clients as long as all the clients support the XMPP/OTR combination. How would you force all open source messengers beyond your country’s jurisdiction to do your bidding?

Not to forget, Android is an open platform for most of its code, and users are allowed to root devices and side-load applications. How are you going to control rooting and side-loading without someone beyond your country’s jurisdiction figuring out an alternative, whatever the laws on side-loading applications, rooting phones, or creating one’s own backdoor-free crypto applications?

Another problem is users using hardware and software to build their own home-made crypto machines and tools, encrypting their messages in advance before sending them over possibly backdoored channels or carrying them on possibly backdoored devices. All you need is a cheaply mass-produced MCU chip programmed with cryptographic functions, coupled with a display and an input tied to the MCU, or a Python crypto script run on an OpenBSD machine, with the pre-encrypted message then copied over manually. How do we prevent such things from slipping past the “dragnet”?
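The pre-encryption idea above (and the pencil-and-paper one-time pad Clive mentioned earlier) can be sketched with the simplest such scheme, a one-time pad. This is a minimal illustration, not a production tool, and it assumes the pad is truly random, pre-shared out of band, and never reused:

```python
import os

# Minimal one-time-pad style pre-encryption: the plaintext is encrypted
# on an offline machine before it ever touches a (possibly backdoored)
# messenger or device. The pad must be truly random, at least as long
# as the message, and never reused; otherwise this is badly broken.

def otp_encrypt(plaintext: bytes, key: bytes) -> bytes:
    assert len(key) >= len(plaintext), "pad must be at least message length"
    return bytes(p ^ k for p, k in zip(plaintext, key))

otp_decrypt = otp_encrypt   # XOR is its own inverse

message = b"meet at the usual place"
pad = os.urandom(len(message))        # exchanged out-of-band in advance
ciphertext = otp_encrypt(message, pad)

assert otp_decrypt(ciphertext, pad) == message
```

Since only `ciphertext` ever leaves the offline machine, a backdoor in the transport channel sees nothing but uniformly random bytes, which is exactly the point being made: the end point of the encryption has moved beyond the backdoor’s reach.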

Would it require another round of crypto, security and IT import/export controls at the UN level, to ensure that only lousy crypto and security exist and that no higher-assurance crypto and security survive?

Maybe every chip should be checked to ensure it cannot be programmed to compute cryptographic algorithms and their building blocks (XOR, AND, BigInteger maths …)?

Your approaches have a lot of holes to cover.

Thoth March 18, 2016 5:12 AM

@Rolf Weber
re: Un-extractable private keys

To add detail on extracting secrets from hardware protection: it is assumed that the smartphone chip contains a security processor of sorts that stores secrets (manufacturer keys) in embedded Non-Volatile Memory (NVM). Because implementing a full-fledged high-security HSM is expensive, the assumption is that the smartphone chip has the equivalent of a smartcard security chip. This is essentially a downgraded form of hardware security, lacking the active security monitoring and response a high-grade HSM provides (including when power is deliberately removed to attempt tampering). Such low-grade hardware security measures have been shown to be flawed in various research efforts breaking the security of smartcards and HSMs.

Rolf March 18, 2016 5:47 AM

@Thoth

Re smartphones, my suggestion is not one HSM per smartphone, it is one “master key” HSM, located at Apple’s premises.

Re messengers, I already said I don’t want to cover all messengers. If the criminal uses Signal, then bad luck for the FBI. I just want to cover big players like WhatsApp or iMessage, which operate under the jurisdiction of western democracies.

Dirk Praet March 18, 2016 8:50 AM

@ Rolf Weber, @Thoth

I already said I don’t want to cover all messengers. If the criminal uses Signal, then bad luck for the FBI. I just want to cover big players like WhatsApp or iMessage, which operate under the jurisdiction of western democracies.

I doubt such an approach would fly from a legal point of view. Most western democracies have firmly enshrined into their constitutions the concept of legal equality, i.e. that everyone must be treated equally under the law regardless of race, gender, national origin, color, ethnicity, religion, disability, or other characteristics, without privilege, discrimination or bias.

In the US, SCOTUS has repeatedly upheld that certain constitutional rights also protect legal persons such as corporations. It is thus hardly thinkable that mandated backdoors could only be imposed on a select group of companies depending on their size or number of users while others offering similar products would be exempt. In addition, commercial law provides more specific state and federal statutes governing unfair competition and disadvantages. Bottom line: mandated backdoors either apply to all or to none.

Which leaves us with the international angle. You may argue that 99% of private persons don’t give a rat’s *ss about their privacy and that mandated backdoors will only cause a minority of them to ditch Apple or other US products. That’s probably true, but it will definitely affect purchases by foreign public sector and business entities where confidentiality of data and communications is of the essence. And those do represent a huge part of their sales revenues.

Do you really think anyone doubts for a second that it’s just a matter of time before the NSA, in its continuing mission to harvest foreign intelligence, goes after your government mandated and company implemented backdoors too and starts exploiting them? What country or company can possibly be looking forward to that?

Comey’s crusade is not going down well with the US tech industry. Lost revenues are probably the main reason Apple is now vehemently fighting the FBI’s court order and why they’re being supported by other tech behemoths.

In short: you and Comey are in an increasingly isolated position. Even the US IC is not backing him up. You’re taking a purely ideological stance, benefits and necessity of which at this time are questionable at best, and in the process completely ignoring or downplaying the very real and huge political, technical, economical and security implications thereof.

herman March 18, 2016 12:17 PM

BlackBerry was killed this way. When countries around the world forced BB to hand over their keys, everybody dropped their BB phones like hot potatoes and bought new ones (that were equally insecure).

Skeptical March 18, 2016 3:21 PM

@Nick:

That’s not necessarily the case. There’s what they’ll do on their own and other things (or limits) they’ll do based on what other countries are doing. It’s negotiation leverage at play.

Oh, I’ll happily amend my statement to this: most countries are not waiting to see what the US requires before making similar demands. This applies especially to closed and authoritarian societies, where the governments are deeply concerned with controlling information. For them, free press poses an existential threat (literally), and so their motivation to examine software products, or require concessions as a condition to entering their market, is extremely high.

As to what the PRC, or Russia, or any number of other countries require, a company does have to weigh the likelihood of IP theft (the PRC, for example, requires substantial partnership with PRC entities – often state owned enterprises – as a prerequisite to doing business there) against the benefit of access to the Chinese market.

And indeed, as it became clearer that the PRC was intent on using foreign investment to acquire technology for themselves and build their own industry, many foreign companies have soured on certain types of ventures in the PRC. I’d expect that trend to continue.

Russia, temporarily at least, pulled back from certain demands for fear that its access to foreign technology might become more restricted – then again, it’s also a smaller potential market and carries other risk factors that are present in lower magnitudes in the PRC, and so has less leverage than the PRC regardless. Although, that said, Russia – which was terrified of the implications of falling behind in information technology during the last decades of the Cold War – also produces some brilliant research and development and very high quality creations in various areas.

Because of the US Government’s strong stance against commercial espionage, however, source code inspection by it actually presents almost no economic risk. Obviously, if the inspection can be done on a confidential basis, that risk is further minimized as one can avoid altogether the possibility that it might encourage other nations to ask the same.

Sancho_P March 18, 2016 6:24 PM

It’s funny that people who ignore the technical infeasibility of adding secure backdoors to insecure systems (akin to adding a “secure” door to a nonexistent wall)
also propose solutions that likewise ignore the primary, very basic non-technical issues of secret remote access in a globalized world.

Shifting the load to a (national) manufacturer / provider doesn’t solve any of these problems:

Trust (or, not only internationally: lack of).
Transparency.
Reproducibility.
Liability.

The only answer is: absolutely untouchable devices / procedures.
Thanks, Mr. Comey, for bringing this to our attention!
Ed Snowden alone didn’t suffice.

Sancho_P March 18, 2016 6:27 PM

@Skeptical, re your:

”Oh, I’ll happily amend my statement to this: most countries are not waiting to see what the US requires before making similar demands. This applies especially to closed and authoritarian societies, where the governments are deeply concerned with controlling information. For them, free press poses an existential threat (literally), and so their motivation to examine software products, or require concessions as a condition to entering their market, is extremely high.”

Yes, as we understand, the US is eager to lead this downhill race.

Skeptical March 18, 2016 8:00 PM

@Dirk:

Either a device or particular piece of software is secure for everyone, or it isn’t. NOBUS has been debunked time and time and again, so yes, can we please for once be slightly realistic about these things?

Okay. Every means of non-designed access to a device or to the plaintext of a communication is universally available and equally accessible to all.

Or… not. Perhaps some distinctions are in order.

Obviously the slogan becomes inapplicable if we start discussing designed access mechanisms to a device rather than vulnerabilities and exploits.

I can only reiterate my opinion that the exact conditions, methods and modus operandi for government subversion of contemporary electronic devices and communications needs to be thoroughly discussed in Congress and then voted upon instead of the executive branch seizing such unprecedented powers under a 200 year old statute. And which is also what Apple and most of the rest of the tech sector are asking.

Except that’s NOT what this case is about. This isn’t a question about whether we’re going to mandate a lawful access mechanism into mobile devices.

Instead it is very much a judicial one: is the assistance required of Apple by the court order issued under the All Writs Act reasonable, necessary, and not inconsistent with other laws?

Note that if the Department of Justice did not have sufficient specificity to detail what they wanted to do, and how, then this would be a much murkier case and the DoJ would likely lose. It’s the fact that this vulnerability is rather clear, and the implementation apparently reasonably clear as well, that brings us into reasonable-assistance territory.

Such an approach, in view of the potentially huge consequences of what the FBI is asking for,

There are none. The nearly apocalyptic rhetoric emanating from some corners is borderline hilarious.

If the government gets what it wants and the new piece of legislation passes constitutional scrutiny, then so be it and many of us will just move to non-US products.

What new piece of legislation?

Rolf March 19, 2016 3:17 AM

@Dirk Praet

A law basically saying “communication service providers are required to be able to circumvent implemented encryption in order to allow lawful interception” is equal for all, even if it is not enforceable against some (those located abroad).

Regarding your other points, we are in the legal and political discussion of whether “backdoors” are a good idea, or whether they hurt more than they help. And that’s where the discussion belongs. And I don’t say your arguments are poor or wrong here. Not at all; maybe your legal and political arguments are better than mine.

But — and this is my main point — you don’t have technology on your side. If companies decide to voluntarily implement “backdoors”, or if they are required by courts or lawmakers, they can implement it securely.

@Skeptical


What new piece of legislation?

I think that currently the U.S. government is testing how far they can go with existing law (the All Writs Act regarding smartphone encryption, CALEA regarding E2E encryption). But they can lose one or both cases. And even if they win, drawbacks will remain: smartphones still cannot be unlocked if the user had a strong passphrase, and WhatsApp could easily change its client to alert on key changes. So I believe too that, at least in the long run, legislation is needed.

Clive Robinson March 19, 2016 5:07 AM

@ Rolf Weber,

If companies decide to voluntarily implement “backdoors”, or if they are required by courts or lawmakers, they can implement it securely.

You are still not getting it.

An encryption algorithm like AES may be judged to be secure. But in practical usage its security can easily be compromised by a defective implementation or a deficient key selection process.

All backdoors we know of use either defective implementation or predictable key usage. Thus they are not secure.

Do you get it now?
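Clive’s point, that a sound algorithm can be sunk by defective implementation or key handling, can be made concrete. In the toy sketch below (the “cipher” is just a SHA-256 counter-mode keystream, purely illustrative), reusing the same key and nonce for two messages cancels the keystream and leaks the XOR of the plaintexts, without any attack on the underlying primitive at all:

```python
import hashlib

# The cipher itself can be flawless and the system still broken by misuse.
# Here a (simulated) stream cipher reuses the same key and nonce for two
# messages: XORing the two ciphertexts cancels the keystream entirely,
# leaking the XOR of the plaintexts with no attack on the cipher needed.

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, nonce: bytes, msg: bytes) -> bytes:
    ks = keystream(key, nonce, len(msg))
    return bytes(m ^ k for m, k in zip(msg, ks))

key, nonce = b"k" * 16, b"n" * 8          # nonce wrongly reused below
c1 = encrypt(key, nonce, b"attack at dawn")
c2 = encrypt(key, nonce, b"attack at dusk")

leak = bytes(a ^ b for a, b in zip(c1, c2))
# leak equals plaintext1 XOR plaintext2: zero bytes wherever they agree
assert leak == bytes(a ^ b for a, b in zip(b"attack at dawn", b"attack at dusk"))
```

A backdoor adds exactly this kind of extra key material and extra code path, which is why “the crypto is strong” says nothing about whether the overall mechanism is secure.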

Rolf Weber March 19, 2016 6:06 AM

@Clive Robinson

If you are arguing from possible implementation flaws, then I suggest you stop relying on this unsafe encryption altogether, move into a bunker and cut the wire.

Bugs and mistakes are always possible. They are found and closed, making the system more and more secure. That’s how security research works. Your point is not at all a valid argument.

But if you ask for examples: The “backdoor” Apple had on older iPhones was, AFAIK, never compromised or misused (even though I assume it was a poor “backdoor”).

Another example is the Skype “backdoor”. AFAIK never compromised.

Figureitout March 19, 2016 9:26 AM

Rolf Weber
Your point is not at all a valid argument.
–Actually it’s painfully valid, especially in security, in which you claim to be a technical practitioner. The implementation not only has to be correct, but has to resist external change via additional checks and delays, so attackers can be frozen in place and captured in due time, or the attack shut down. People who actually have the responsibility of securing valuable assets know they’ve been burned enough times by their own stupid mistakes, probably have logs showing someone got through and out without being stopped, and know that intentionally adding more holes reflects poorly on them; they could easily get thrown under the bus for a bullsh*t reason like deliberate backdoors. I think you’re just too stubborn and stupid to accept you might be wrong, which is worse than trolling; another accusation I’m lobbing your way b/c you go against basically the entire security industry.

RE: the iphone backdoor being compromised
–AFAYK? Yes, that’s evidence enough for me: Rolf’s word, everybody! He can monitor millions upon millions of devices scattered across the globe. Case closed.

No, we need some of that technical evidence, “Mr. Technical”: a securely implemented backdoor, and how you’re logging access. Not more worthless repetition of the same wrong statements. Snowden was right too, by the way, all right (sorry, couldn’t resist, guys).

Dirk Praet March 19, 2016 12:17 PM

@ Rolf Weber

If companies decide to voluntarily implement “backdoors”, or if they are required by courts or lawmakers, they can implement it securely.

No they can’t, Rolf. But let’s assume for argument’s sake that they can.

Government A legislates mandatory backdoors which vendors start to implement. Subsequently, governments B to Z start doing the same thing and companies have the choice to either comply or back out of that jurisdiction. In no time, the entire backdoor management becomes a technical and logistical nightmare placing a completely unreasonable burden on any company.

Smaller outfits decide that all the fuss is not worth their trouble and dump their security related products and ideas, thus stifling innovation. Especially low-funded open source projects suffer because everyone can inspect their code and eventually die or become closed source. Security research needs to be heavily regulated or even banned because you can never tell when you have hit a genuine product vulnerability or a government mandated backdoor, tampering with which may end you up in jail.

The secure backdoors mandated by country A can now be used against them by the intelligence services of countries B to Z who have compelled companies from country A to deliver backdoors to them too. Crime syndicates go after the backdoors too and succeed in all kinds of banana republics where they have already deeply infiltrated the government. The public sector and security sensitive business entities in countries B to Z start dumping products from country A that in its turn does the same with products from countries B to Z that are backdoored too. The end result: a complete balkanisation of the internet and the tech industry with everybody less secure than before.

There really is a good reason why folks like @Skeptical are so adamant about the SB case being a stand-alone, one-off case that is perfectly covered by existing legislation. Contrary to you, he has thought this through and is very well aware of the potentially devastating blowback of universally mandated backdoors.

I believe I have sufficiently indulged your claim that it would somehow be possible to create secure backdoors. Could you now please return the favour by contemplating, if only for one minute, that you are dead wrong, and what the consequences thereof could be on top of the above?

Rolf Weber March 19, 2016 12:40 PM

Wait, @Figureitout, I’m not the absolutist here. I never claimed backdoors are without risks. I even say that backdoors make the whole system potentially more insecure, simply because it’s additional code and access, and thus potentially more bugs and mistakes. But you get something back for these additional risks: law enforcement and foreign intelligence. My point of view is that we need to weigh the risks against the gains, and here I argue that the “backdoors” are implementable with a very reasonable level of security. And I deny your absolutist view that “backdoors” necessarily are vulnerabilities.

Let me explain with an example: administration of a firewall. You will agree that the most secure approach is when administration is only possible from the console. But most companies will consider this too inconvenient and time-wasting for the admins, and allow an out-of-band (OOB) net for remote access. That is already something you could call a backdoor. But it’s considered reasonably secure, and the gains clearly outweigh the risks.
We could go further: a company that has outsourced its firewall administration, where some restricted access from outside the company needs to be allowed. An even bigger backdoor, which may nevertheless still pass the adequacy test.
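The out-of-band management idea above can be sketched in a few lines (a hypothetical illustration; the subnet and the addresses below are made up):

```python
import ipaddress

# Sketch of the "OOB management net" idea: administrative access is only
# accepted when the connection originates inside a dedicated out-of-band
# subnet; everything else is refused. The subnet is an invented example.
OOB_MGMT_NET = ipaddress.ip_network("10.99.0.0/24")

def mgmt_access_allowed(source_ip: str) -> bool:
    """Return True only if the source address is on the OOB management net."""
    return ipaddress.ip_address(source_ip) in OOB_MGMT_NET

print(mgmt_access_allowed("10.99.0.15"))   # True: admin box on the OOB net
print(mgmt_access_allowed("203.0.113.7"))  # False: arbitrary internet host
```

The point of the analogy is that such an access path is a risk you can bound and reason about, which is exactly what the argument hinges on.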

Good security is never absolutist or dogmatic. The skill is to know the risks and weigh them against the benefits. That’s what I do, and you don’t (or you do, and see zero benefit in law enforcement and foreign intelligence, in which case your absolutist view is somewhat logical).

And I think I described my proposals in enough detail. Maybe read them again. If something is unclear, just ask.

Rolf Weber March 19, 2016 12:50 PM

@Dirk Praet

Again, you argue against a strawman. I never demanded backdoors that are exploitable by agencies. I want laws that require smartphone manufacturers to be able to unlock their own devices, and service providers to circumvent their own encryption, so that both are able to respond to lawful requests. No more, no less.

Figureitout March 19, 2016 3:48 PM

Rolf Weber
–I think you are the absolutist, wanting to force companies to implement a backdoor b/c the gov’t can’t work out a way to do their jobs w/o compromising a product b/c they’re lazy and need some more competition in their job field (back when I had to deal w/ a hit-and-run on my parked car (got the f*cker thanks to my friend only and our subsequent personal investigations getting enough evidence to pass on, otherwise he would’ve gotten away), the private investigators in the insurance company got the job done much better than the cops who didn’t “GAF”, I’d rather dial their number than 911 next time).

As an aspiring engineer (I say I’m one in my email signature, but I’m still in school), I’d be pissed as hell if I had some agency tell me to muck up my design and my product w/ a backdoor. Really pissed. They don’t mention in textbooks to compromise your design for lazy law enforcement and morally-corrupt-to-the-core intel agents who can’t make do w/ the flood of info they already have access to. No, instead we have an unenforced moral duty to be honest and above all make it safe for the user (this is serious; the guy that designed that bridge that wobbled spectacularly and failed committed suicide from the shame, and more likely the lack of job prospects…). In science and engineering, being “absolutist” (a misleading term for it, more like “correct”) is required many times, based on what we can observe (repeatedly), document, and study further. Take the periodic table of elements: pure gold is not pure hydrogen; they are absolutely chemically different. Go hold the 2 wires of an AC outlet w/ both hands and create a circuit in your body if you don’t absolutely believe you’ll get electrocuted; maybe you can legally force a backdoor around electrocution and see how that works out for you.

In security, once someone has conducted sufficient recon to find patterns in your schedule, they know the weak points where they can attempt further breaches relatively risk-free, while making sure no one revamps the system in the meantime. That knowledge can then be sold or given away for free. So many tools exist which allow this w/ any internet access. Once they’re in, they’re in. And crypto protections begin to fall apart when you can keylog and screen-capture.

Yeah, like I told Dirk Praet w/ his tails candy, I didn’t like team viewer b/c it’s a remote way in your system. I don’t want it, IT staff can come in person, not remotely access the PC. I keep them off my work PC mostly, and the owner didn’t want me bringing in my own PC anymore (I have my suspicions about that request…I run live, wired internet only (I physically removed wifi antenna, and wifi/BT module)) so now I risk compromising our builds which is very nerve-wracking for me.

I’ll take my absolute security over your backdoored one w/ lazy people who need some fear of losing their job, thank you. And yeah, your proposals were highly lacking in any technical details of how to implement a secure backdoor, so I won’t ask you again. Best to avoid unsubstantiated proposals and legal decrees based on ignorance that will fail and lead to more failure.

Dirk Praet March 19, 2016 5:56 PM

@ Rolf Weber

I want laws that require smartphone manufacturers to be able to unlock their own devices, and service providers to circumvent their own encryption, so that both are able to respond to lawful requests

Which can only be achieved by mandated backdoors, either already present in the form of exploitable vulnerabilities or newly created at the government’s request. And at which time the scenario unfolds that I have described in my previous post, but which you obstinately refuse to even consider because it does not fit your view of the world.

@ Figureitout

Yeah, like I told Dirk Praet w/ his tails candy, I didn’t like team viewer b/c it’s a remote way in your system. I don’t want it…

Err, the Teamviewer in my TAILS Candy was only meant to be used as a client, not as a server.

Sancho_P March 19, 2016 7:17 PM

@Skeptical

To see FBI / Apple as an isolated judicial case would be OK for a judicial marionette.
But you shouldn’t try to downplay this court order to an isolated case without any precedent, as the FBI decided upfront to go public (contrary to your dishonest deception attempt).

However, even what seems to be a simple, isolated case can’t be handled by marionettes.
Man-made laws have a context and a history.
This is why human-made laws require human judges.

I have high confidence in the common sense of human judges (as recently seen in the EU High Court), probably as the last chance to stop power, greed and stupidity.

Sancho_P March 19, 2016 7:20 PM

@Rolf Weber

”… they can implement it securely.”

Nope. I’m not talking solely about bugs / vulnerabilities; it’s the design itself that is not secure.
We do not have secure hardware, secure processes (development, transmission), secure authentication or secure implementation.

There is nothing to hinge a secure backdoor on; the whole building isn’t secure.
End of game for the word “secure” in the context of backdoors.

”I want laws that require smartphone manufacturers to be able to unlock their own devices, and service providers to circumvent their own encryption …”

Doesn’t make sense; you are shifting the burden from (national) authorities to (national) companies, multiplying the attack surface.

Rolf Weber March 20, 2016 2:25 AM

@Sancho_P


I’m having high confidence in the common sense of human judges (as recently seen in EU High Court), probably as the last chance to stop power, greed and stupidity.

Great, me too, I also have this high confidence.
But I guess you have it, at least in part, because you (like so many others) still misinterpret the ECJ’s SafeHarbor decision. SafeHarbor was only invalidated because the EU commission blew it. Now with PrivacyShield they did their homework: they carefully evaluated U.S. laws and practices, and came to the conclusion that the U.S. is “essentially equivalent”. And that’s basically all the ECJ wanted to hear. PrivacyShield will stand.


We do not have secure hardware, secure processes (development, transmission), secure authentication and secure implementation.

But why do you live with these “insecurities” in all other crypto implementations?

Only an absolutist will demand 100% security. I will not explain that again. I’ll just tell you that you are a dreamer if you think that judges and lawmakers are absolutists too.


Doesn’t make sense; you are shifting the burden from (national) authorities to (national) companies, multiplying the attack surface.

No, I just take advantage of the fact that “backdoors” are much easier to design and implement if all that’s required is that smartphone manufacturers and service providers must be able to circumvent their own “security”. Being so easy to design and implement, the remaining risks (because nothing is 100%) for my proposals are very, very low.

And here it is important to understand that currently Apple and WhatsApp are still able to circumvent their “security”, because it completely relies on obscurity, lies and promises.

Z.Lozinski March 20, 2016 6:02 AM

Again, you argue against a strawman. I never demanded backdoors that are exploitable by agencies. I want laws that require smartphone manufacturers to be able to unlock their own devices, and service providers to circumvent their own encryption, so that both are able to respond to lawful requests. No more, no less.

We have a data point that a facility, similar to the one you request, designed and implemented by the largest telecom vendor in the world, with substantial engineering experience, was compromised leading to significant financial loss.

What concerns some of us, is that such a facility will be compromised, and then exploited by well-funded criminals. (The sort of people who develop methods to steal USD 1 billion from the Bangladeshi central bank account at the US Federal Reserve).

In case you don’t know or recognise the similar situation from the past:

In the 1990s, the CALEA legislation in the USA mandated that telecom equipment manufacturers provide the technical ability to respond to lawful requests in new digital switches.

In 2004-2005, the Lawful Intercept subsystem of the Vodafone network in Greece was compromised. An unknown intruder was able to use the Vodafone network to intercept the Greek Prime Minister’s phone along with those of the Cabinet. The best account of the case is “The Athens Affair”, IEEE Spectrum, Vassilis Prevelakis, Diomidis Spinellis, 29 Jun 2007.

http://spectrum.ieee.org/telecom/security/the-athens-affair

We can also quantify the economic cost of the episode. EETT (the Hellenic Telecommunications and Post Commission) fined Vodafone (the network operator) between EUR 69.7 million and EUR 95.1 million, and fined Ericsson (the switch manufacturer) USD 10M. (The legal appeals on the amount of the fines are still ongoing as of Dec 2015.) The numbers are from Vodafone’s transparency report.

Under the latest draft European Privacy Directive, the maximum liability in this type of breach is increased. Again, this is an economic incentive not to implement such a facility, as a compromise could cost EUR billions.

Gerard van Vooren March 20, 2016 8:10 AM

@ Rolf Weber,

“Don’t drink and write.”

Hmmm, I’ve written some great stuff while being drunk. It’s pressing the send button when you have to be careful. On the other hand, when you’re drunk you are more likely to tell the truth. This can be a great starting point for a discussion. I do agree, however, that continuing a discussion isn’t that great while being drunk.

So, as with most things, it isn’t black and white.

Figureitout March 20, 2016 8:11 AM

Dirt Praet
–So it’s not possible in any way to subvert client mode of it or make a bunch of outbound connections? According to this thread, anything on the clipboard of even the client (long, annoying-to-type passwords, say) is accessible by the host if you don’t “click a box” (which means the feature’s still there).

http://security.stackexchange.com/questions/12152/how-secure-is-teamviewer-for-simple-remote-support

If it’s necessary for your livelihood, then it’s an acceptable risk. If it has nothing to do w/ it, and you’d rather IT staff come in person so you can make a quick judgment call of someone who could do serious damage to your network, I don’t want it anywhere near my machines.

Figureitout March 20, 2016 8:23 AM

Rolf Weber
Don’t drink and write.
–I was sober, I’m just passionate. Doesn’t change the fact that you offered up no technical way at all. It was comically lacking, and leads me to believe you’re just another desk jockey who doesn’t know what he’s talking about. Public policy people should listen to the professionals w/ the experience.

Clive Robinson March 20, 2016 9:56 AM

@ Rolf Weber,

Only an absolutist will demand 100% security.

You are obviously not listening, still. As others have noted, you can not be “a little bit pregnant”: either you are or you are not.

Whilst you might argue security is a spectrum between 100% secure and 100% insecure, then argue as you do that you cannot be 100% secure, you fail to follow through that argument to 100% insecure.

If you have a backdoor in a system then you know that the system is 100% insecure, there is absolutely no question of this. No if’s, no but’s, no maybe’s: its security is totally broken, and the only question remaining is to how many people.

However, the thing about 100% security is that, unlike 100% insecurity, you can not know you have it. That’s due to the obvious problem of “trying to prove a negative” and the “unknown knowns”, “known unknowns” and “unknown unknowns” issue with classes of attack and instances of attack.

Further, a system may have bugs, but these are not of necessity usable as vulnerabilities; likewise a system may have vulnerabilities but still not be insecure. That is, you may have three layers of security, each with a vulnerability, but unless an attacker can align those three layered vulnerabilities they can not get through the three layers. In this respect it’s like the wheels of a combination lock: it only opens when they are all aligned, and if you can not get them aligned for some reason, the lock keeps giving the same level of security it did before the attack started.
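The combination-lock analogy can be put in numbers (a toy sketch; the per-layer figures are invented, and independence of the layers is an assumption):

```python
# Toy model of layered security: an attacker only gets through when the
# flaws in ALL layers line up at once. Assuming the layers fail
# independently, the combined chance is the product of per-layer chances.
def aligned_breach_chance(per_layer):
    """per_layer: chance (0..1) of bypassing each layer in one attempt."""
    chance = 1.0
    for p in per_layer:
        chance *= p
    return chance

# Three layers, each 10% likely to be bypassed in a given attempt:
print(round(aligned_breach_chance([0.1, 0.1, 0.1]), 6))  # 0.001
```

Which is the point being made: individually vulnerable layers can still leave the whole far harder to open than any single wheel.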

These are just some of the problems with your “cloud cuckoo land” viewpoint, which arises from the fallacies of your basic “absolutist” position and your view of governments and all their agents as 100% trustworthy, reliable, honest etc., when it’s fairly obvious to most that the opposite is the case. Not least of which is your trying every which way to distort the realities of honest whistleblowing into “criminal intent”.

As for “Don’t drink and write”, can I urge you to “do think as you write”.

Rolf Weber March 20, 2016 10:44 AM

@Z.Lozinski

3 points regarding the “Athens Affair”:

First, I already said there is nothing 100% secure. There are bugs, mistakes, rogue employees and much more that can lead to compromises. But what’s your point? If you argue that Athens is proof that we should go without “backdoors”, is then Heartbleed proof that we should go without SSL?

Second, when I read your link, I highly doubt that the law enforcement “backdoor” enabled the attack. It just made it more convenient for the attackers. According to the report, the phone calls were unencrypted within Vodafone Greece’s backbone, and the attackers were highly sophisticated, had broad access to the internal network and likely inside help. So it is likely they could have gained access to routers and switches and intercepted there.
So it is very likely this attack would have been successful even without the “backdoor”.

Third, there is an economic incentive to keep your network safe. The mistakes and carelessness of Vodafone and others were very little related to the “backdoor”, if at all.

@Figureitout

Here again my proposals. What “technical details” are you exactly missing?

https://plus.google.com/+RolfWeber/posts/SYw6AD8xkK7
https://plus.google.com/+RolfWeber/posts/fPK3DyfYdNG

Sancho_P March 20, 2016 11:02 AM

@Rolf Weber

No, while SafeHarbor might be a good example of delaying stupidity (it takes a looong time to change the course of a huge steamer) it’s not that important (to me).
I was thinking about the data retention laws (BVerfG, later also the EU, and the new attempts in Stasiland).

Re “secure backdoor” my single point is: Omit the term “secure”, it’s disingenuous.

For further thinking, even the term “backdoor” is deceptive: it is not the door, it is the kind of access to that door that is important.
If you’re thinking about a one-for-all key solution, then call it a “general key for all front doors”; people would then know that there is absolutely no privacy to be expected.
Undetectable secret access: no need to invest in security locks / burglar alarms.

For the manufacturer and provider solution:
We don’t trust our own Gov / LE (because it’s opaque, corrupt, with built-in impunity, lacking any self-cleaning / optimizing / control mechanism) and we know we are at the shorter end of the lever.
So how could we trust a multinational company or their (outsourced) slaves from a different jurisdiction (culture)?

Now don’t think about your dick pics only, think of business data, proposals, IP, business contacts, possibly diplomatic relations, international contract details.
Doing business with different companies in several countries, I can’t agree to the idea of any general key, and thereby probably silently losing delicate data to someone unknown, as e.g. OPM did.
You know that exposing distinct (personal) data sometimes may result in even fatal consequences, depending on their culture?
And they wouldn’t ask if it was done just carelessly [1].

It doesn’t matter if LEOs, spies (the good ones) or competitors,
a general key,
– one for all devices,
– or one key for all my data (hard drive, memory, files)
is simply a no go.

As is any secret access to my content without my knowledge.
I’m not a criminal, I just demand my personal security.
And I don’t have a scale for the unknown, btw.

[1]
Traveling to Sudan once, airport customs demanded my phone. I didn’t have to unlock it; they just took it to a nearby machine, discussed a little bit, then I got it back.
On the way to our hotel I found out it wasn’t mine, couldn’t unlock it.
My phone contained (encrypted) business data, also from Egypt.
Sudan and Egypt are not only friends (Nile water, part of our business).
I never got the phone back, they “couldn’t find” it any more.
However, our negotiations in Khartoum were not disrupted, we had a backup.

Stunt or mistake, I don’t know, but I’m glad all the data was (separately) truecrypted.

Sancho_P March 20, 2016 11:14 AM

@Rolf Weber, re Athens Affair

To me this is an example that we must try to keep spies out, not only the ordinary criminals.
If you include a loophole the lawful terrorists (e.g. CIA) will use it.

Niko March 20, 2016 3:58 PM

@Dirk
Re:
Can you please explain to me how you will handle government requests from, say Saudi Arabia, Turkey, China, Syria and the like when they demand access to a known backdoored product or service? As long as there are no backdoors whatsoever, at least a company can put up a credible legal defense, but which goes entirely out the door once they are there.

Any company operating under their jurisdiction could not but comply or discontinue their operation. What are you going to tell either those companies or the targets of such warrants? That it’s just tough luck that they need to either comply with “lawful” government demands and put peoples lives at stake or forfeit their revenues? And to activists that they just chose the wrong country to oppose the government?

This would be handled exactly as it is today. China, Saudi Arabia, and Turkey can certainly ask for a backdoored version of a product today for the Chinese/Saudi Arabian/Turkish domestic markets. You have to have a very US-centric world view to think they all wait on the US to make demands. Syria might be a case by itself, given the various sanctions in effect and the genocide. Stewart Baker, https://www.washingtonpost.com/news/volokh-conspiracy/wp/2016/02/25/deposing-tim-cook/ , has some very interesting speculation regarding Apple, China, and WAPI. If Apple wants to sell iPhones/iPads/whatever in China, it has to sell products in China compatible with Chinese lawful intercept laws. The idea that not providing a backdoor to the US is a credible legal defense for not providing a backdoor to China in a Chinese court is sheer nonsense.

Rolf Weber March 20, 2016 4:01 PM

@Clive Robinson


If you have a backdoor in a system then you know that the system is 100% insecure, there is absolutly no question of this.

This is, sorry, simply technical bullshit. When I open ssh access to my server I am not “100% insecure”. The security still depends on the password strength, access lists, the sshd software’s security, and much more. Your absolutist views have nothing to do with reality.
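To make the ssh example concrete: how defensible such an access point is comes down to configuration choices like these (a minimal hardening fragment; the directives are standard OpenSSH options, while the account name and address are placeholders):

```
# /etc/ssh/sshd_config (fragment)
PasswordAuthentication no     # keys only, nothing guessable to brute-force
PermitRootLogin no            # no direct root logins
AllowUsers admin              # explicit allow-list of accounts
ListenAddress 10.99.0.1       # listen on the management interface only
```

Each directive narrows who can even attempt to use the access point; the disagreement in this thread is over whether a government-mandated backdoor could ever be bounded the same way.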

And regarding the link with the letter to Obama, it claims the same absolutist nonsense. Of course math is absolute, and nobody can negotiate math. But the discussion about “backdoors” is not about the math itself, it’s about how to use the math.

@Sancho_P


If you’re thinking about a one-for-all key solution so call it “general key for all front doors”, now people would know that there is absolutely no privacy to be expected.

Nobody demands a “general key for all front doors”. I say that smartphone manufacturers should be able to unlock their own phones, if the following conditions are met:
– the user runs original software and uses the manufacturer’s default tools
– the request to unlock is lawful
– the manufacturer is in physical possession of the phone
– the unlocking is performed on the manufacturer’s premises

No more, no less. This could be implemented reasonably securely, and it would by no means jeopardize everyone’s privacy.


Stunt or mistake, I don’t know, but I’m glad all the data was (separately) truecrypted.

Good, and that’s what I say: those who need some “extra security” should never rely solely on the manufacturer’s default settings, but take their own measures. And nobody aims to outlaw or weaken individual security measures taken by users. This is a strawman.

Sancho_P March 20, 2016 6:36 PM

@Niko, re Apple backdoor in the US

Let’s skip that “backdoor or not” discussion and get straight to your point:
”The idea of not providing a backdoor to the US is a credible legal defense for not providing a backdoor to China in a Chinese court is sheer nonsense”
and turn it around:

Apple, after being compelled in the US, their democratic homeland, would have absolutely no stand to reject such a request from China.

However,
winning this ridiculous case in the US, they would never have to produce this access for any other nation, because the whole world would cry foul at the requesting government; Apple’s popularity would soar and compensate for any potential loss from the requesting country.
Believe it or not, even North Korea wouldn’t request that access, they aren’t stupid.

Yes, let’s weaken the ethical standards in the US to promote worldwide fascism
(probably you don’t know about Francisco Franco, but I have some memories, even from when he was under the protection of the UN and NATO).

But I agree, Tim Cook should be pressed to answer Stewart Baker’s questions, too.
And react.

Figureitout March 20, 2016 6:37 PM

Rolf Weber
–Basically all of them. What ciphers are used, what toolchain to build with, how to build, where’s the code, what chip(s) does it run on, etc. It’s just you talking out the ass w/ big claims, not an implementation. Put it together in a tangible system (preferably a modern smartphone) so people can test your backdoored system no one will want to use. I guess a possible good use is having a backdoor for yourself for recovery purposes, but still, if someone finds it they can get in and out w/o leaving that digital evidence… that’s a risk people should have the choice to decide for themselves, not be forced into by authoritarian-like lazy law enforcement.

Your first one, you for some reason can’t compute moving encoding/encryption off the device, and continue to state a falsehood that a mitm attack will work on that content and is the only way. You then gloss over how the actual interception would happen.

Your second one was better, but again lacking an actual implementation which we can test to see whether it’s actually technically sound… you kind of need that if you’re going to make the case that it’s possible to have a backdoored security system that is “secure”. And I didn’t see any risk analysis of the potential attacks made possible only b/c of the backdoor (some employees at the company can now get into any phone brought in, etc.; that’s a big risk). Not having the backdoor keys makes the system more secure, and this is not debatable. Serious customers, aware of the risks and how we struggle to keep things secure at the best of times, w/ valuable assets to protect, want the more secure system.

Sancho_P March 20, 2016 6:46 PM

@Rolf Weber

So “nobody demands a general key for all front doors” – how come?
But you want them to create that key and also delegate the responsibility for loss/abuse to them, if I understand correctly?
Strange.

Your points are interesting, though:

  • The first, if allowed, will render that function void in many (namely the interesting) cases.
  • The second is impractical, who knows which law is applicable for the phone of an unidentified terrorist, ID-less refugee (sorry for this context), someone with more than one citizenship, a phone bought in country A but acquired in country B, bought by a friendly government, stolen from …, …,
    and requires trust in the requesting judicial system (which I wouldn’t even have for the US, imagine the three letter agencies are involved – btw. you didn’t specify the kind of crime or LEO … Tons of work for the manufacturer, horrible).
  • The third one sounds interesting at first but is flaky, see my example and think of CIA-trained people:
    They probably have stolen my device and then provide a lawful request under the name of, say, Bruce Sancho Sanchez, a terrorist, just to get to my data and their opponents.
    They first would ship the device to any point on earth.
    No, not worth a dime in real world.
  • The fourth one is good, at least no secret remote access.
    Just you forgot to specify the location of the premises (so we would know whom to bribe for exceptional access).

Sorry, your proposal doesn’t convince me.

Dirk Praet March 20, 2016 8:05 PM

@ Figureitout

So it’s not possible in any way to subvert client mode of it or make a bunch of outbound connections?

It probably is if you work long enough on it, but even getting the client to work is already a bit of a challenge, in that TAILS is very restrictive about network connections, so there are first a couple of tweaks to make. But as I already said before, putting TeamViewer in TAILS Candy was more of an experiment than a persistent feature meant to be permanently installed and enabled.

Don’t drink and write

You don’t have to explain yourself to someone who is rapidly building a reputation for rude and offensive comments against anyone who doesn’t agree with his opinions.

not Dirt…innocent mistake

That’s ok. You owe me a Jack Daniel’s next time you’re in Belgium. I’d love to discuss your recent work with you 😎

@ Rolf Weber

But why do you live with this “insecurities” with all other crypto implementations?

We try to secure our data and communications with crypto because it’s one of the best tools in the arsenal. But nobody, not even our host who has some relevant experience in the field, will ever claim it’s bulletproof. Algorithms and implementations may be flawed. Technological or mathematical advances may suddenly render previously solid crypto vulnerable to practical attacks. In practice, not a week goes by without some new research, vulnerability or exploit making headlines and being discussed on this blog. Since there is no such thing as a 100% secure algorithm or implementation, neither is there a secure backdoor. Why the heck is that not sinking in with you?

But back to your question: vendors and users accept the risk of doing crypto because the risks associated with not doing so are much bigger. And we try to mitigate our risk (= Asset + Threat + Vulnerability) by making algorithms and applications as strong as humanly and technically possible. Any which way you turn it, your government mandated backdoors – whether secure or insecure – will significantly impact any crypto related risk analysis.

If insecure, they either preserve existing or create new vulnerabilities making everybody less secure, in the process imposing on vendors an unreasonable burden of backdoor management and exposing them to potentially huge liabilities in case of a breach. If somehow by an act of $DEITY (or sheer luck) they would be secure, the scenario I have described in one of my previous posts may well ensue. Short: from a risk management perspective, nobody in his right mind can possibly support what you and Comey are proposing.

Being so easy to design and implement, the remaining risks (because nothing is 100%) for my proposals are very, very low.

No they aren’t, Rolf. Pick up a book about security design and risk management some time. While you’re at it, start with answering my balkanisation scenario instead of calling it a straw man argument. It’s a potential outcome of what you’re proposing, not a refuting of an argument you didn’t make. But we had already previously established that your understanding of certain common definitions and concepts is not always entirely spot-on.

And since you decided to venture into the domain of logical fallacies, do allow me to point out some of yours.

If you argue that Athens is proof that we should go without “backdoors”, is then Heartbleed proof that we should go without SSL?

Ever heard of a non sequitur argument, Rolf? A fine example of a propositional fallacy leading to the either disingenuous or downright stupid conclusion that the product should be dumped instead of the vulnerabilities closed.

I highly doubt that the law enforcement “backdoor” enabled the attack. It just made it more convenient for the attackers.

Conjecture, Rolf, or an interesting case of an informal fallacy known as “argumentum ex silentio”, where a claim is based on the absence of, rather than the existence of evidence. And what do you not understand about the fact that rogue software was using the lawful wiretapping mechanisms of Vodafone’s digital switches to tap about 100 phones? You’re just showing off again that you refuse to take anything into consideration that does not fit your narrative.

The security still depends on the password strength, access lists, the sshd software security, and much more.

No, it doesn’t. Once a backdoor is installed, none of that matters any more. That’s the entire purpose of a backdoor, but you’re just not getting that. And, by the way, you really should switch to key-based instead of password-based authentication when using ssh. That’s what most technical security practitioners with half a clue do.

Dismissing a claim as absurd (“technical bullshit”) without demonstrating (proper) proof for its absurdity is an informal fallacy known as the appeal to the stone or “argumentum ad lapidem”. Your “don’t drink and write” comment at @Figureitout was a nice example of the “argumentum ad hominem”, or evading a reply by conducting a personal attack instead.

I say that smartphone manufacturers should be able to unlock their own phones, if the following conditions are met:

So you keep repeating, Rolf, and so we keep refuting without you listening. Reiterating a position time and again does not make it a better argument. It’s actually yet another informal fallacy called “argumentum ad infinitum”, which consists in repeating a previously discussed opinion until nobody cares to argue against it anymore.

@ Niko

The idea that not providing a backdoor to the US is a credible legal defense for not providing a backdoor to China in a Chinese court is sheer nonsense.

That’s not what I said. While the absence of a backdoor will certainly not prevent any government from demanding one (e.g. the Blackberry case I quoted), the known presence thereof will make it considerably harder for US companies to either fight or stall such requests in court, because neither unreasonable burden nor domestic legal restrictions can be invoked anymore. Wanna bet how this plays out if ever the US legislates mandatory backdoors? It’s gonna be a backdoor bonanza even backroom diplomacy won’t be able to contain.

@ Sancho_P

The fourth one is good, at least no secret remote access.

As in Animal Farm’s Seven Commandments, it will be the first limitation to be lifted as soon as both domestic and international requests will start pouring in and Apple (or vendor X) cannot cope with them any longer in a for LE acceptable time frame.

Figureitout March 20, 2016 9:08 PM

Dirk Praet RE: teamviewer
–Ok, understood. I have some bad history w/ the software so I’m biased.

You don’t have to explain yourself
–Thanks mate, I think I let my emotions go again and got a little rude. But goddamn, the arguments don’t make sense! You know Rolf is going to keep on spewing falsehoods and ignoring facts pushed in his face so new uninformed people interested in security and maybe other senior people looking for guidance are going to get introduced to falsehoods and give them a chance they don’t deserve. This is a case where technical people need to get out of their shell and speak up, b/c otherwise ignorant people push you around and take advantage of you. They literally want us to backdoor our products now! What the hell.

You owe me a Jack Daniel’s
–I’d buy you 2 at least mate, hopefully you rode your bike to the bar. I want a Jupiler again too (don’t know why they don’t sell more in US). But yeah I have a couple interesting things, but I can’t say b/c I can’t be the one giving out IP that matters (skilled attackers could plow thru the company’s defenses anyway and get it, based on my observations). What’s funny is the underlying principle was known by amateur radio people back in the 1910’s or maybe even earlier but was undocumented. So many things, people already knew it w/ much less tools and access to knowledge! I was blown away by it over a hundred years later…

Rolf Weber March 21, 2016 6:55 AM

@Sancho_P


However, winning this ridiculous case in the US they would never have to produce this access for any other nation, because the whole world would cry foul at the requesting government

I cannot believe that you are that naive. Since when do authoritarian regimes care about what the rest of the world cries foul?
Again: The only reason that can prevent these regimes from trying to push companies like Apple or WhatsApp is that they cannot really compel these companies, because what’s needed can only be enforced at their headquarters, and those are located in the U.S. They can only try to bully them with sales prohibitions or blocking services or other things, but that would likely hurt them too.


Your points are interesting, though:

– The first, if allowed, will render that function void in many (namely the interesting) cases.

Then so be it. With this point I wanted to make clear that Apple can only be held accountable for the features and services they provide. If the user jailbroke his iPhone and removed the “backdoor” I proposed, then the phone cannot be unlocked. Or if the user encrypted his data with 3rd party apps or his own measures, then it cannot be decrypted. I wanted to make clear it is not about a key to everything, it is just about a key to Apple’s “default security”.


– The second is impractical, who knows which law is applicable for the phone of an unidentified terrorist, […]

Since the unlocking can only be performed at Apple’s headquarters, only U.S. law applies. If other governments need Apple’s help, they need to ask the U.S. government.


– The third one sounds interesting at first but is flaky, see my example and think of CIA-trained people:
They probably have stolen my device and then provide a lawful request under the name of, say, Bruce Sancho Sanchez, a terrorist, just to get to my data and their opponents.

Then they did something unlawful. This may happen, but it is risky for whoever orders and performs it.


– The fourth one is good, at least no secret remote access.
Just you forgot to specify the location of the premises (so we would know whom to bribe for exceptional access).

I’d suggest Apple’s headquarters, to make it as secure as possible. And BTW, this approach also ensures that the “unlocking key” cannot be stolen undetected.

@Figureitout


It’s just you talking out the ass w/ big claims, not an implementation.

This is why I call it a proposal, and not an implementation guide.
First, even if I wanted to, I couldn’t do that, because it would have to be implemented in Apple’s or WhatsApp’s source code, and I have neither.
Second, what I do here is my hobby. I don’t have the time (even a prototype would surely take a few weeks), or better said, I don’t want to do it, at least not unpaid.


Your first one, you for some reason can’t compute moving encoding/encryption off the device, and continue to state a falsehood that a mitm attack will work on that content and is the only way.

Of course a MITM would work. This is basic crypto, I won’t discuss it with you.
And I never claimed it would be the only way. I said it is how I would do it if I had to.

@Dirk Praet


Since there is no such thing as a 100% secure algorithm or implementation, neither is there a secure backdoor.

Of course there is a secure backdoor, but not a 100% secure one. Just as there is secure communication, but no 100% secure communication. Why the heck is that not sinking in with you?


And what do you not understand about the fact that rogue software was using the lawful wiretapping mechanisms of Vodafone’s digital switches to tap about 100 phones?

I already explained that the “lawful wiretapping mechanisms” were most likely only convenient for the attackers, not necessary. And I will not explain it to you again.


So you keep repeating, Rolf, and so we keep refuting without you listening.

I know, and I know it is because you have no technical arguments against my proposals. You cannot show any (serious) weaknesses in the proposed designs. This is perfectly fine with me.

Again, your argumentation is ideological, almost religious. “There is no secure backdoor” is nothing more than a commandment. We techies implement and use “backdoors” almost every day. We configure remote access, we implement and use remote updates, we share data and secrets, we build tunnels, we tap the network for troubleshooting, and so on and on and on …
We do this because we are able to weigh the risks against the benefits. But when governments ask us to provide a reasonably secure solution for lawful interception, we think we can take an absolutist stand. And here I simply say: this is not only wrong, it is deeply dishonest.

Dirk Praet March 21, 2016 12:33 PM

@ Rolf Weber

The only reason that can prevent these regimes from trying to push companies like Apple or WhatsApp is that they cannot really compel these companies, because what’s needed can only be enforced at their headquarters, and those are located in the U.S.

You are not listening, Rolf. From a legal vantage, they perfectly can and it doesn’t make one bit of a difference whether or not the headquarters are in the US. If you really believe that China is going to FedEx its iPhones to Cupertino for unlocking, or submit to a US government or court decision whether or not an unlocking request is lawful, then who’s being naive here?

Of course there is a secure backdoor, but not a 100% secure one.

You cannot change industry-wide accepted definitions to fit your narrative, Rolf. Once an implementation of any kind contains a deliberate backdoor, it is no longer considered secure, irrespective of who is or is not able to access it. The reason for that is that security is agnostic, not biased (or religious) in how it views adversaries. It only differentiates in their capabilities, not in their intentions, i.e. whether or not they are the good guys that have a really good reason to circumvent it.

As @Clive already tried to explain to you too: in security, there is no such thing as being a little bit pregnant. You either are, or you are not. If that’s too absolutist a position for you, then that’s too bad and good luck with starting up your own security cult.

I already explained that the “lawful wiretapping mechanisms” were most likely only convenient for the attackers, not necessary

Argumentum both ex silentio and ad infinitum. You cannot prove your statement, and on calling you out on it, you just repeat it.

I know, and I know it is because you have no technical arguments against my proposals.

Nobody is saying your backdoors are technically impossible to implement. You’re using a straw man argument. What we are saying is that they cannot be implemented in a way secure enough to prevent abuse, exposing both the complying companies and the product’s users to unacceptable risks.

Subverting key exchanges means compromising one of the most critical security components in any communications application like WhatsApp. For years, people have been trying to make it more secure against MITM attacks. All of which goes out of the window when your benign governments have their way, leaving crippled and insecure applications prone to all kinds of attackers, even from a purely technical vantage.
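To make the stakes concrete for readers new to the topic, the attack class in question can be shown with a toy, unauthenticated Diffie-Hellman exchange. This is a sketch with illustrative parameters only, nothing like a real deployment:

```python
# Toy sketch of why an unauthenticated key exchange is fatal: with no way
# to verify the peer's public key, a man-in-the-middle substitutes her own
# value in both directions and shares a separate secret with each endpoint.
# Parameters are illustrative only, far too small for real use.
import secrets

P = 2**127 - 1   # a Mersenne prime
G = 3

def keypair():
    priv = secrets.randbelow(P - 2) + 1
    return priv, pow(G, priv, P)

a_priv, a_pub = keypair()   # Alice
b_priv, b_pub = keypair()   # Bob
m_priv, m_pub = keypair()   # Mallory, sitting on the wire

# Mallory replaces both public values in transit with her own:
alice_secret = pow(m_pub, a_priv, P)   # what Alice thinks she shares with Bob
bob_secret   = pow(m_pub, b_priv, P)   # what Bob thinks he shares with Alice

# Mallory can derive both secrets, so she can decrypt, read and silently
# re-encrypt all traffic between the two:
assert alice_secret == pow(a_pub, m_priv, P)
assert bob_secret   == pow(b_pub, m_priv, P)
```

Key verification (WhatsApp’s security codes, Signal’s safety numbers) exists precisely to defeat this; a mandated subversion of the key exchange reintroduces it by design.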

Whether you like it or not, that’s the general consensus among the vast majority of security professionals. Calling us a sect does not change that. In fact, it’s another informal fallacy known as inflation of conflict, in which disagreement over a certain topic is used to put into question an entire group of experts or the legitimacy of their field.

We techies implement and use “backdoors” almost every day. We configure remote access, we implement and use remote updates, we share data and secrets, we build tunnels …

Has it ever occurred to you that this is the exact reason why many of our systems and networks are so insecure? All too often, what you describe is done by cocky greenhorns with a very limited understanding of, and experience with, security, convinced that their take on things is the only right one, even when all industry guidelines and best practices say the opposite.

We do this because we are able to weigh the risks against the benefits.

An informal fallacy known as false authority or appeal to authority. Being an experienced system or network engineer does not make you a security or risk management expert. In your particular case, you have failed to produce any formal credentials in either security or risk management, and based on your comments, I doubt anyone on this blog is going to credit you with any either.

Rolf Weber March 21, 2016 5:06 PM

@Dirk Praet


If you really believe that China is going to FedEx its iPhones to Cupertino for unlocking, or submit to a US government or court decision whether or not an unlocking request is lawful, then who’s being naive here?

Since the iPhone can only be unlocked in Cupertino, which other options would China have?


Once an implementation of any kind contains a deliberate backdoor, it is no longer considered secure, irrespective of who is or is not able to access it.

Do you know what the problem with your absolutist argumentation is? How do you define the term “backdoor”? Please tell us what your technical definition of a “backdoor” is? And I bet I can show you plenty of widely accepted and respected implementations that already use such kinds of “backdoors”. But let’s see, so please write down your definition of “backdoor”.

Has it ever occurred to you that this is the exact reason why many of our systems and networks are so insecure?

You do not configure remote access? You do not use or implement remote updates? You do not share data or secrets? You do not build tunnels?
Really? Are you serious?

Sancho_P March 21, 2016 6:11 PM

@Rolf Weber

I’d suggest you rethink your proposal before going public.
It seems no one here supports (understands?) your ideas, that should make you a bit concerned.
Try to discuss your points with a close friend. Such direct communication may give you some hints why we don’t seem to understand you.

Dirk Praet March 21, 2016 8:51 PM

@ Rolf Weber

Since the iPhone can only be unlocked in Cupertino, which other options would China have?

The day China wants the same capability the FBI is asking for, they’re not going to settle for unlocking in Cupertino and with US permission only. That scenario only exists in your dreams. Ask the guys at Blackberry.

Please tell us what your technical definition of a “backdoor” is?

Ever heard of Wikipedia? It’s a good starting point.

You do not configure remote access? You do not use or implement remote updates? …

Stop using straw man arguments. That’s not what I said.

Rolf Weber March 22, 2016 5:51 AM

@Sancho_P


I’d suggest you rethink your proposal before going public.
It seems no one here supports (understands?) your ideas, that should make you a bit concerned.

No, of course I anticipated that. I’m not a daydreamer. I know what to expect from people who still believe in Snowden fairy tales …
My intentions are different, and I got the answers I wanted, believe me.

@Dirk Praet


The day China wants the same capability the FBI is asking for, they’re not going to settle for unlocking in Cupertino and with US permission only. That scenario only exists in your dreams.

Again, they cannot be compelled. It is only possible to try to blackmail them. I know you don’t get the difference.


Ask the guys at Blackberry.

Blackberry could not be compelled. Blackberry is the best example that regimes will try to push, independently of what happens in the U.S. And whether companies like Blackberry surrender to blackmail is also independent of what happens in the U.S.


Ever heard of Wikipedia? It’s a good starting point.

In Wikipedia I read a lot of “often”, “may”, “might” etc. I hoped your definition would be a bit more specific, because you claim, in absolutist fashion, that all backdoors are inherently insecure. I wonder how it is possible to make such an absolutist claim about something that’s so vaguely defined. So again, your claim sounds more like a religious belief.

And I don’t find in Wikipedia that all backdoors would be insecure. But I find this:

“Asymmetric backdoors
A traditional backdoor is a symmetric backdoor: anyone that finds the backdoor can in turn use it. The notion of an asymmetric backdoor was introduced by Adam Young and Moti Yung in the Proceedings of Advances in Cryptology: Crypto ’96. An asymmetric backdoor can only be used by the attacker who plants it, even if the full implementation of the backdoor becomes public (e.g., via publishing, being discovered and disclosed by reverse engineering, etc.).”

That’s exactly what I propose. And to add, in my case the “attacker” is the manufacturer or service provider itself.


Stop using straw man arguments. That’s not what I said.

You asked, after I listed a few examples of what we technicians often do (and what is in fact implementing or using “backdoors”):
“Has it ever occurred to you that this is the exact reason why many of our systems and networks are so insecure?”

So I asked you if you do this too or not. Would you care to answer?

Dirk Praet March 22, 2016 1:12 PM

@ Rolf Weber

I know what to expect from people who still believe in Snowden fairy tales

Since that has been established for quite a while, what on earth are you still doing here? Too few people visiting your own blog and no one to discuss with?

Again, they cannot be compelled. It is only possible to try to blackmail them.

Yes they can, Rolf. Ask any lawyer. Ask @Skeptical. A company that does not comply with the laws of the land in the jurisdiction of which it operates ends up paying serious fines, sees folks go to jail or faces sales prohibitions. The only way to escape such sentences is to back out of that jurisdiction, like any common criminal fleeing the country he perpetrated his crime in. And in doing so foregoing millions, if not billions, in revenues. That’s not blackmail. That’s the way the law works.

But I find this: Asymmetric backdoors

I am familiar with asymmetric backdoors, also known as kleptography, thank you. Asymmetric backdoors seriously increase the burden on any attacker, but are still toast in case of a key compromise or a flawed implementation. Same thing with PGP: once your secret key is compromised, it’s game over. So they’re still not secure, just less insecure than their symmetric counterparts. I leave it to @Bruce, @Clive, @Thoth and @Anura to provide the more technical insights into the matter.
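For the non-cryptographers following along: the escrow-style “asymmetric backdoor” being proposed boils down to wrapping every session key under one public key whose private half the vendor guards. A deliberately toy sketch (textbook RSA with tiny parameters, purely to show the structure and its single point of failure):

```python
# Toy sketch of an escrow-style "asymmetric backdoor": every session key is
# wrapped under one escrow public key baked into the client. Anyone can wrap;
# only the private-key holder can unwrap. Textbook RSA with laughably small
# primes -- an illustration, not cryptography.
import secrets

P, Q = 2003, 2011                    # tiny primes: NOT secure
N = P * Q
E = 65537                            # public exponent shipped in every client
D = pow(E, -1, (P - 1) * (Q - 1))    # private exponent the vendor must guard

def escrow_wrap(session_key: int) -> int:
    """What each client does: wrap its session key for the escrow holder."""
    return pow(session_key, E, N)

def escrow_unwrap(wrapped: int) -> int:
    """What only the holder of D can do."""
    return pow(wrapped, D, N)

k = secrets.randbelow(N - 2) + 2
assert escrow_unwrap(escrow_wrap(k)) == k
# The catch: D is a single point of failure. Leak or subpoena it once, and
# every wrapped session key ever recorded becomes recoverable in bulk.
```

Which is the point being made above: the scheme holds exactly until the one escrow key is compromised, after which everything it ever protected falls at once.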

So I asked you if you do this too or not. Would you care to answer?

The point I made was that all too often folks with limited experience in security and risk management go about such stuff in an entirely careless way because they’re either unaware of, or simply don’t understand concepts and best practices, pushing their own flawed interpretations thereof instead. With sometimes devastating consequences.

Let me put it simply: any SE introducing unauthorised backdoors on an infrastructure under my control gets fired on the spot. Same goes for any crew member obstinately pushing his own interpretations of commonly accepted definitions and practices, not listening to a word his team leader is saying, in the process calling bullsh*t and fairy tales anything he doesn’t agree with.

@Clive has already warned you to be a tad more careful with both your lone wolf opinions and overall attitude here. And I’m doing it again. If ever you need to apply for a new job, I hope for your sake that the employer or his technical interviewer never reads your comments on this blog because if ever they do they’re never gonna let you anywhere near their facilities.

Rolf Weber March 22, 2016 4:44 PM

@Dirk Praet


Asymmetric backdoors seriously increase the burden on any attacker, but are still toast in case of a key compromise or a flawed implementation. Same thing with PGP: once your secret key is compromised, it’s game over.

Exactly. Maybe you finally get it?
The same is true for ssh, IPSec, HTTPS and so on: All these protocols we rely on for online shopping, online banking, private conversations and so on also require that keys are not compromised and implementations are not flawed. Exactly the same is true for asymmetric “backdoors”. So even you seem to agree that while asymmetric “backdoors” are of course not 100% secure, they are as secure as eg our daily online banking. That’s all I wanted to hear.


If ever you need to apply for a new job, I hope for your sake that the employer or his technical interviewer never reads your comments on this blog because if ever they do they’re never gonna let you anywhere near their facilities.

LOL. I answer you the same as I answered him: Unlike you or him, I comment here with my real, verifiable identity. I stand by what I say. Here as well as in real life — I deliberately make it very easy to find me in real life. You don’t. So thanks, but no thanks, for your “care”.

Dirk Praet March 22, 2016 5:37 PM

@ Rolf Weber

So even you seem to agree that while asymmetric “backdoors” are of course not 100% secure, they are as secure as eg our daily online banking. That’s all I wanted to hear.

Sigh. What we’ve been trying to explain to you all along is that since there is no such thing as a bulletproof application, neither is there a bulletproof (or “secure”) backdoor. The only thing the backdoor does is add an additional attack surface to the application, thus rendering it even more vulnerable or insecure. And that is something no security professional will ever recommend, for whatever reason. As I told you before, security is agnostic. It does not differentiate between “good” and “bad” attackers, since that is just a question of perception.

How this vindicates your opinion that it is possible to create secure backdoors accessible by their authors or privileged users only is, frankly, beyond me.

I stand by what I say. Here as well as in real life

Good luck with that. It’s gonna make you unemployable in any security related job.

Buck March 22, 2016 5:42 PM

@Rolf Weber

So even you seem to agree that while asymmetric “backdoors” are of course not 100% secure, they are as secure as eg our daily online banking.

Almost, but I think you have forgotten to consider the amount of resources that an attacker can reasonably devote to kleptography… If it takes a decent amount of effort to break into each of the banks’ “backdoors” and all of the third-party monitoring systems, then a single asymmetric “backdoor” which provides access to all of these systems would certainly prove to be a much more attractive target.

I feel like you may be conflating different aspects of your argument here though. You said you wanted to require physical access, but what does that have to do with our contemporary remote-access techniques?

Niko March 22, 2016 8:22 PM

@dirk

Then I’m at a loss as to what you’re getting at: fight or stall what, in whose court? In a truly authoritarian regime, it’s almost impossible to fight or stall anything in court, when you’re facing off against the national government. Almost by definition, an authoritarian regime is one in which an independent judiciary does not exist. The unreasonable burden test comes from the US vs New York Telephone Company case. Even in a country like India, with an independent judiciary, it’s not clear what the relevance is of an “unreasonable burden.”

Clive Robinson March 22, 2016 11:03 PM

@ Niko,

… it’s not clear what the relevance is of an “unreasonable burden.”

An unreasonable burden is assessed by reasoning like that of the man on the Clapham Omnibus.

It’s all about what is considered “reasonable or not by society in general”, which quickly gets complicated.

In many societies there is the idea –sometimes put into statute– that society has common interests, one of which is that it is desirable for bystanders to assist either other citizens or the authorities in emergencies. It’s similar reasoning to why there is the codified notion of “citizen’s arrest”. It’s also the flip side of “self defence”.

Because the nature of emergencies is variable, and likewise the skills, strengths and abilities of individuals, things have to be decided on a case by case basis. For instance, if a police officer asked for assistance with restraining a criminal, whilst it might appear reasonable to ask an apparently fit and well young male, would it be reasonable to ask a seemingly frail old lady? What if the criminal is violent? What if the police officer is in imminent danger? What if the police officer is asking because he wants to go off for a smoke?

The test is “what the man sitting on the Clapham Omnibus would consider reasonable” if he were in the same position.

Thus three things have to be considered. The first is whether there is an emergency, and what it is. The second is the capabilities of the officer asking for assistance, and any harms that might arise if it’s not given. The third is the capabilities of the citizen being asked, and any harms they might suffer in giving assistance. From these, other questions naturally arise depending on the situation.

I would argue that the FBI fails at the first test: there is not, nor was there, an emergency when Apple was asked to give assistance under the AWA (though I’m sure Jim Comey & Co will try to argue there is, especially after the events in the EU today).

Apple had clear and reasonable assistance procedures in place when originally asked for assistance. The FBI, knowing this, chose to go against that advice, arguably making an emergency that might be considered worse (the FBI have not offered a reasonable defense for their actions, which many consider a significant “tell”).

Further, there was a considerable period of time when the FBI chose not to do anything (another significant “tell”). Thus it was hardly a “clear and present danger”, a reasonableness test used for defining an emergency in which self defence is permissible, and which can be used to judge here.

Secondly, there are the capabilities of the LEO making the request. The FBI prides itself on being the foremost agency for forensics in the US, to which all LEOs are invited to train etc. They are in effect the US “Gold Standard”. They have capabilities and resources commensurate with that national status, and are thus required to maintain advanced capabilities well in advance of others. For whatever reason, the FBI chose not to develop or procure those capabilities; instead they have bleated on about “going dark” and asked for legislative changes the elected legislators have made clear they were not going to grant “for the good of society”.

This suggests that society, or at least those who represent it, does not think that the FBI should “reasonably” have what they ask for.

It’s the same argument as for the use of flammable materials in constructing buildings. Laws could easily be passed to stop the use of all flammable materials; however, a reasonableness assessment is made on the “cost to society”, and so far, on balance, the cost of not using them outweighs by some considerable amount the cost of using them.

Some would argue that the FBI have become “fire starters”, trying to change the balance by deliberately making “emergencies” where none would otherwise have existed (i.e. talking idiots into becoming wannabe jihadists and, by giving them access to what would otherwise not be available to them, getting them to act as terrorists).

All of which raises the question of whether the FBI’s senior management have quite deliberately, and with quite serious intent, chosen not to develop a technical capability they might otherwise quite reasonably have acquired against what they claim is a very real, persistent and serious threat (though they have never given credible evidence that it is a serious threat, just some scenarios that are at best contrived and, even if they did happen, would get nowhere near the tipping point on the societal costs).

I could go on but I’m sure others will want to give counter arguments at this point.

Clive Robinson March 22, 2016 11:43 PM

@ Rolf Weber,

So even you seem to agree that while asymmetric “backdoors” are of course not 100% secure, they are as secure as eg our daily online banking. That’s all I wanted to hear.

Oh dear, oh dear, oh dear…

Thus you show, for all to see, that you are quite deliberately ignoring other quite important aspects, which makes your position at best fanciful.

The thing about backdoors is that they are not a choice of the data owner (who, in the case of kleptography in asymmetric keys, cannot even reasonably show that the backdoor exists).

Thus they are denied control in the dimension of time, and likewise in other domains.

You argue that such an asymmetric key backdoor is the equivalent of SSH etc. in security risks, and that backdoors are therefore no more harmful.

It’s a false argument; you can only make it by deliberately ignoring –as you do– that with SSH the data owner has knowledge and control, but with a backdoor has no knowledge and no control.

Thus the ability of the data owner to assess and mitigate risk is denied to them with a backdoor.

Further, with SSH the data owner can put in place mitigations via instrumentation. That is, they can observe traffic and detect unauthorised activity. That is most definitely not the case with a backdoor using kleptography on asymmetric keys.

Thus, as always, your argument is both flawed and wanting, if not quite deliberately deceitful.

Dirk Praet March 23, 2016 9:18 AM

@ Niko

In a truly authoritarian regime, it’s almost impossible to fight or stall anything in court, when you’re facing off against the national government.

Exactly. And that’s why you need all the leverage you can possibly get.

Although, admittedly, I have no idea what the Chinese, Russian, Saudi or even Indian statutes governing such requests are or whether the procedure would be adversarial or ex parte, I guess it’s safe to assume that you don’t either.

Even authoritarian regimes have some kind of judicial system in place where, even in show trials, certain formal conditions have to be met. They also may or may not be signatories to international treaties and covenants that may or may not apply and provide some kind of leverage, either to the defense or to the USG, which is well-known for pulling the international law card when it suits them. And nobody likes a diplomatic row with the US.

All I’m saying is that any possible defense arguments will be significantly weakened if in the country of origin of the company – in this case the US – there is no recourse either against such requests, whether under domestic or international law.

Rolf Weber March 23, 2016 4:31 PM

@Buck


I feel like you may be conflating different aspects of your argument here though. You said you wanted to require physical access, but what does that have to do with our contemporary remote-access techniques?

These are two different things. With the HSM located at Apple’s headquarters, I just want to “harden” my proposed “backdoor”, because it ensures 1. that unlocking can only be performed on Apple’s premises and 2. that it is impossible to copy the key undetected.
With the examples of remote access and online banking, I wanted to illustrate that these techniques are not 100% secure either. We use them anyway, because we consider them secure enough. And there is no reason we shouldn’t handle “backdoors” the same way: They do not need to be 100% secure, only reasonably secure. And of course a carefully implemented asymmetric “backdoor” is reasonably secure. At least nobody here in this forum could disprove it so far.

@Dirk Praet


The only thing the backdoor does is adding an additional attack surface to the application, thus rendering it even more vulnerable or insecure.

Again, I don’t deny that a “backdoor” potentially renders a system more insecure. But if implemented carefully, the risks are marginal. And it is simply not true that this is the “only thing” a “backdoor” does; on the other hand, it gives us the benefit of being able to perform lawful interception.
And exactly the same is true, for example, of remote admin access to a firewall. This also renders the system potentially more insecure. But we do it because of the benefits.

Security has nothing to do with your absolutist, dogmatic and cloistered views. Security is always weighing the benefits against the risks.

@Clive Robinson


The thing about backdoors is that they are not a choice of the data owner

This is (at best) an ethical argument, not a technical one.


Thus the ability of the data owner to assess and mitigate risk is denied to them with a backdoor.

Huh? Of course the “data owners” would know that smartphones are unlockable by manufacturers and messages are decryptable by service providers. And of course they could mitigate by adding their own encryption layer or using other services or products.

Dirk Praet March 23, 2016 9:12 PM

@ Rolf Weber

Security is always weighing the benefits against the risks.

If I read you correctly, the essence of your claim in this discussion boils down to the benefits of government mandated backdoors outweighing the risks. That is the outcome of your very own and very government biased risk analysis, not that of an objective security assessment. That’s two different things you shouldn’t conflate.

Even if we were to agree that it’s technically possible to implement “reasonably” secure backdoors – and which we don’t – there still are a number of legal, operational, societal and other elements to take into consideration and which further tilt the balance from a risk analysis point of view.

One last time: security does not differentiate between state and other actors introducing vulnerabilities, whatever their reasons for doing so and irrespective of whether or not slavery, the gassing of Jews or the deportation of Muslims is lawful under their jurisdiction. That’s a completely artificial and politically motivated distinction that only exists in your mind and that of other government cronies.

Whether you like it or not, no impartial security professional or risk analyst with any relevant experience in the field sees it your way. Not on this blog and not in the real world. If you can’t live with that, please go find yourself another forum to troll with your ill-informed opinions, obsessive ramblings, abusive attitude and fallacy riddled arguments.

Clive Robinson March 23, 2016 10:19 PM

@ Rolf Weber,

This is (at best) an ethical argument, not a technical one.

No it is not an “ethical argument” when it comes to assessing risk and mitigations, and it casts considerable doubt on your technical abilities for saying so.

Huh? Of course the “data owners” would know that smartphones are unlockable by manufacturers and messages are decryptable by service providers.

Well, most people would disagree with you, as witnessed by the upset of even US & UK politicians on discovering that the IC of their own nations is monitoring all their communications. Then of course there was the upset in Germany with regard to their own IC monitoring their own politicians and monitoring on behalf of the Five Eyes.

Prior to the Ed Snowden revelations very few knew that this was happening. Of those that did and said what they could, their warnings were either ignored or they were told –by those who did not know– that they did not know what they were talking about (if you look back on this blog far enough you will see it).

The reality of life is you can only mitigate something if you know what it is and how it works.

Whilst it is possible to extend the secure communications end points beyond the phone, it only achieves a mitigation if the extension can be trusted.

The whole point of kleptographic backdoors in asymmetric keys, covert in-band backdoor channels in stream ciphers and other backdoors in algorithms is that you cannot trust the fundamentals of the encryption and modes/protocols you use. Even if you laboriously carried out the calculations using pen and paper.

To mitigate you have to have a trusted base, and backdoors in crypto algorithms do not allow you to build a trusted base.

If you do not understand this or chose to ignore it, then there is no hope for you or the security of any systems you are involved with.

But as you chose to mention ethics earlier, a thought for you, what gives any collection of people the ethical right to spy on other people going about their activities?

And please don’t fall into the fallacy of “good-v-bad”; it’s a human interpretation of the motives of a “controlling mind” over an agnostic process.

You have to decide between anybody and everybody spying on you, or nobody spying on you, there is no middle ground with agnostic processes. Anybody who thinks otherwise either does not know what they are talking about, or are handwaving over the issue because they want to exploit it.

Rolf Weber March 24, 2016 2:40 AM

@Wael

Just saw your comment in the other forum. The paper you cite is IMO a red herring, because it only discusses backdoors with a direct government access. I absolutely agree that this is a bad idea, and it is not what I propose.

I want laws that require companies under their jurisdiction to be able to decrypt or circumvent their own encryption. No more, no less. And I showed that it is possible to implement this reasonably securely.

And this would change very little compared to the status quo. It would only affect companies like Apple or WhatsApp who tried to benefit from the Snowden hysteria with PR stunts claiming that they are (allegedly) no longer able to break their own security.

And it would change very little in practice. Law enforcement still needs to get a warrant and approach the company with it. If the company can respond, all is good, like today. If not, they are fined, or managers jailed.

Rolf Weber March 24, 2016 3:09 AM

@Dirk Praet


Even if we were to agree that it’s technically possible to implement “reasonably” secure backdoors – and which we don’t – there still are a number of legal, operational, societal and other elements to take into consideration and which further tilt the balance from a risk analysis point of view.

I already agreed that there may be a lot of other good reasons against “backdoors”. And I wouldn’t rule out that these other reasons may prevent the Americans from introducing “backdoors”. But again, you have no technical arguments. If the American society demands it, it is securely implementable.


Whether you like it or not, no impartial security professional or risk analyst with any relevant experience in the field sees it your way.

Again, these are the same “professionals” who believed (some even still believe) in the Snowden fairy tales. I opposed them there, and it turned out that I was right. It will be the same here. The “professionals” again have no factual arguments — just as with the Snowden hoax.

@Clive Robinson


You have to decide between anybody and everybody spying on you, or nobody spying on you, there is no middle ground with agnostic processes.

This is the key point where we disagree. Let’s take WhatsApp as an example. I agree with you that with my proposal, the government is able, or has the power, to spy on everybody who uses WhatsApp. But there are many other limitations on the government, for example:
– The law.
– Internal and external oversight.
– WhatsApp’s transparency reports showing how many people were actually monitored.
– Courts refusing unlawfully obtained evidence.

The difference between you and me is that I have some trust in these other restrictions, and you have none at all. And this is why you oppose any “backdoor”, regardless of how securely it is implemented, while I am in favour of reasonably secure “backdoors”.

Wael March 24, 2016 4:03 AM

@Rolf Weber,

because it only discusses backdoors with a direct government access. I absolutely agree that this is a bad idea, and it is *not* what I propose.

Got it. We have a terminology misunderstanding! What you are advocating isn’t called a backdoor — it’s a manufacturer’s “Foot in The Door”, which exists, by the way, for several implementations, but not all of them. Seems we aren’t talking about backdoors then! At least we agree backdoors are irrepressible! What’s being debated here is the deliberate weakening of Operating Systems to allow access without the “owner’s” control.

And I showed that it is possible to implement this reasonably securely.

Ignoring terminology, there are still problems! How do you guard against insider attacks? The so-called backdoor has to be designed and implemented by architects and developers! What’s to stop one of them from using their proprietary knowledge to aid other attackers, from selling the secrets, or from being coerced into giving away the needed knowledge and tools? There are solutions to this, by the way, but they are costly. For example, the keys to the manufacturer-access “backdoor” must be stored in a tightly controlled HSM. To decrypt data, four things will be required:

  1. Physical access to the device,
  2. Access to the HSM (which never releases encryption keys, but does the key management and performs decryption operation.)
  3. Having a master key for all devices isn’t very clever.
  4. Each device will need a separate key.
  5. A warrant.
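A minimal sketch of the flow those requirements describe, in Python. Everything here is an illustrative assumption: the class name, the HMAC-derived per-device keys, and the HMAC-based toy keystream “cipher” (not real crypto); physical access and the warrant can only be modeled as a flag:

```python
import hmac
import hashlib

class ToyHSM:
    """Toy stand-in for an HSM: holds a master secret and never
    releases it, or any derived per-device key, to the caller.
    The HMAC-derived keystream below is a toy cipher for
    illustration only -- do not use for real security."""

    def __init__(self, master_secret: bytes):
        self._master = master_secret  # never leaves this object

    def _device_key(self, device_id: bytes) -> bytes:
        # Per-device key: derived from the master secret, so there
        # is no single key that unlocks every device directly.
        return hmac.new(self._master, b"device-key|" + device_id,
                        hashlib.sha256).digest()

    def _keystream(self, key: bytes, n: int) -> bytes:
        out, counter = b"", 0
        while len(out) < n:
            out += hmac.new(key, counter.to_bytes(8, "big"),
                            hashlib.sha256).digest()
            counter += 1
        return out[:n]

    def encrypt_for_device(self, device_id: bytes, plaintext: bytes) -> bytes:
        key = self._device_key(device_id)
        return bytes(a ^ b for a, b in
                     zip(plaintext, self._keystream(key, len(plaintext))))

    def decrypt(self, device_id: bytes, ciphertext: bytes,
                warrant_ok: bool) -> bytes:
        # The warrant requirement is modeled as a simple flag here;
        # physical possession of the device cannot be modeled in code.
        if not warrant_ok:
            raise PermissionError("no warrant presented")
        # The per-device key is used internally and never returned.
        key = self._device_key(device_id)
        return bytes(a ^ b for a, b in
                     zip(ciphertext, self._keystream(key, len(ciphertext))))
```

The point of the sketch is the interface, not the cryptography: the caller can request a decryption operation but can never obtain the master secret or a per-device key, which is what distinguishes this design from key escrow that hands keys out.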

And this would change very little compared to status quo.

This is true for firmware-based security; a trusted application running in Secure World, for example, because its software is under the manufacturer’s (or MNO’s) control — one of the legitimate tenured owners of record of the asset[1]. It’s not necessarily true for systems that use HW crypto processors or a discrete HW TPM component without introducing significant changes and possible hardware backdoors into chips. This will adversely affect the security posture of such systems. I mean, if a TPM has known backdoors, can we really call it a “TPM”, knowing that the “T” stands for “Trusted”? Can we trust an HSM with known backdoors? Look, backdoors are the worst idea ever crapped out of the bowels of security whippersnappers.

Law enforcement still needs to get a warrant and approach the company with it.

I am not opposed to this.

If not, they are fined, or managers jailed.

Great idea! Now that’s one way to get your manager fired 🙂

[1] Before you talk about the “Security” posture of a system or an application, tell me what your definition of security looks like in the context of this thread discussion.

Wael March 24, 2016 4:13 AM

@Rolf Weber,

I had a formatting error. Item 3 in the requirements wasn’t supposed to be in this place. Or you can call them 5 requirements rather than 4.

Again, these are the same “professionals” who believed (some even still believe) in the Snowden fairy tales.

Oh, hush now! “Snowden and the seven spooks” is an interesting story!

Dirk Praet March 24, 2016 6:47 AM

@ Rolf Weber

But again, you have no technical arguments. If the American society demands it, it is securely implementable.

Yes we do. You’re just not listening. And how many times do we have to explain to you that this is not just a technical issue? Nobody supports your opinion, not in the security community and not even in the IC. Even former NSA chief Hayden rejects the idea. On this blog, not even @Skeptical is backing you up on mandated backdoors. It should give you a clue as to how dead wrong you are.

If tomorrow the American society under the enlightened leadership of newly elected president Donald Trump decides they’re going to ban Muslims, deport Mexicans and hang Black Americans because technically they can, would you support that too? I’m pretty sure there are interesting KKK studies out there affirming that getting rid of these groups would significantly decrease crime in the US.

Again, these are the same “professionals” who believed (some even still believe) in the Snowden fairy tales. I opposed them there, and it turned out that I was right.

This is too embarrassing for words. First of all, this discussion has NOTHING to do with Snowden. And as to your being “right” about him, there seems to be something you don’t understand about the concept of arguments. In general, whether it be in a court case or an academic paper, a thesis is held to be proven when a majority of experts or a jury of peers agrees with it. As in the current discussion, nobody went along with your allegations. In most societies, convincing nobody but yourself is insufficient to have an opinion vindicated. It’s actually considered more of a trait of a sociopath.

You may wish to remember that if you ever have to stand trial and decide to call the judge incompetent because he doesn’t believe in climate change, by that logic he must also be wrong in dismissing the entirely unconvincing defense you have put up.

ianf March 24, 2016 7:05 AM

Dirk, save yourself the bother. You can’t argue with a broken record, so your & everyone else’s here best tactic is the advice given by QEII to son Prince Charles on his wedding night to one Diana Spencer: “lie back, bite your teeth, and think of England” (complete with Oxford comma).

Rolf Weber March 24, 2016 5:02 PM

@Wael


Seems we aren’t talking about backdoors then!

Yes, this is why I always write my proposed “backdoors” in quotation marks … 🙂


1. Physical access to the device,
2. Access to the HSM (which never releases encryption keys, but does the key management and performs decryption operation.)
3. Each device will need a separate key.
4. A warrant.

That’s exactly what I proposed!


Can we trust an HSM with known backdoors?

Most likely not, I agree.
But my proposed “backdoors” do not require HSM backdoors, at least AFAICS.


“Snowden and the seven spooks” is an interesting story!

LOL. 🙂
You should claim copyright for this term!

@Dirk Praet


If tomorrow the American society under the enlightened leadership of newly elected president Donald Trump decides they’re going to ban Muslims, deport Mexicans and hang Black Americans because technically they can, would you support that too?

I’m confident Trump will not be elected. But I’m also confident that even if he is elected, there will be no radical changes in American policy. The American democracy is robust enough to even stand Trump.


First of all, this discussion has NOTHING to do with Snowden.

Not directly. Snowden is only an example of how much the so-called “experts” can err. Most of these “experts” you uncritically rely on were tricked by the Snowden fairy tales. At least since that experience I only trust arguments, not big names. You don’t, I know. So of course it was you who believed in Snowden’s “direct access” fairy tale, not me.

Wael March 24, 2016 5:45 PM

@Rolf Weber,

You should claim copyright for this term!

‘Snowden and the seven spooks’©

Your fellow German @Benni[1] inspired me when he repeated the word “spook” seven times!

By the way, we only covered “confidentiality” issues with backdoors. We didn’t even talk about repudiation and impersonation, as in framing some innocent person…

[1] He’s been missing in action for a few days now, you didn’t rat him out, did you?

Say, What’s pink and has seven little dents in it? Snowden’s laptop 😉

Sancho_P March 24, 2016 6:17 PM

@Rolf Weber

” – the unlocking is performed on the manufacturer’s premises”

Hilarious! You still go that ”reasonably secure” location route …
And how to tell the device that it’s in the right location for ”lawful” access?
Any of GPS, Glonass, cell tower or WiFi info, facial recognition (e.g. Tim Cook), fingerprint reader or vocal parole-confirmation (e.g. “dream dancer”)?

Or would you, to further increase backdoor “security”, combine some of them?
– Oh, we could add some AI to ”harden” your system, e.g:

[[ inside (!!!) the phone, AI at work ]]:
Let’s see what we have, the phone seems to belong to “Kim Jong-un”, (ha-ha, must be a nickname, sucker), the actual GPS is Peking (that’s bad, not our HQ), the fingerprint is from Putin (bad, too), so both dismissed because of inconsistency.
However, the mugshot of Tim Cook is in “morning” mood (OK, it’s 9:00am in Cupertino), cell tower is NYC (seems to be wrong, but cell tower info has low value) and we’ve called Rolf Weber’s phone, after seven times asking for the parole the “dreamdancer” sounds like angry Rolf (OK, confirmed), that’s enough, so we’re ready to accept the golden key for backdoor access (= flag set for the HSM).
[[ end of AI security deliberations ]]

Rolf + AI, now you got me, this sounds ”reasonably secure” ;-)))

Niko March 24, 2016 9:51 PM

@dirk

I do know that Chinese prosecutors have a 99.93% conviction rate, and that number goes even higher for cases that are widely reported in the news. Anyone who thinks the literal language of the Chinese statutes, or whether a proceeding is adversarial or ex parte, is a deciding factor in Chinese court cases is completely missing the big picture regarding the Chinese judiciary. You’re certainly right that the USG has lots of means of leverage with China, and to some extent diplomacy would be more difficult with US backdoors. However, you’re embedding a large assumption that the USG currently lobbies governments not to ask for backdoors. That may or may not be true, but it’s a large assumption.

Dirk Praet March 25, 2016 8:32 AM

@ Rolf Weber

I’m confident Trump will not be elected. But I’m also confident that even if he is elected, there will be no radical changes in American policy.

Look up the word “metaphor” some time. Perhaps I should use another one: while technically it may be possible for you to shout out at Teheran central market square that Ayatollah Khomeini was obviously gay and knew nothing about Islam, it would probably not be a bright idea.

Not directly. Snowden is only an example of how much the so-called “experts” can err.

Stop using known fallacies to argue your case, Rolf. Who was wrong and who was right about Snowden has no bearing whatsoever on this topic. May I take it that after a few bad experiences with women you also stopped dating because “all women are evil”?

@ niko

However, you’re embedding in a large assumption that the USG currently lobbies governments not to ask for backdoors. That may or may not be true, but it’s a large assumption.

My assumption is that the US will invariably engage in backroom diplomacy wherever it sees events developing that are contrary to its security, economic or other national interests. That’s hardly a leap of faith. Foreign governments asking US companies for backdoors in US software that could eventually be used for spying on US targets is hardly something anyone in the IC, at the DoD, DHS or State Department could be very enthusiastic about.

Unless, of course, they all believe it’s no problem because the internationally acclaimed security and cryptography expert @Rolf Weber, who debunked the Snowden myth, has said it can be done in a “reasonably” secure way.

Rolf Weber March 25, 2016 4:23 PM

@Wael


By the way, we only covered “confidentiality” issues with backdoors. We didn’t even talk about repudiation and impersonation, as in framing some innocent person…

No, but so far in practice this was not even a big issue with unencrypted email …


He’s been missing in action for a few days now, you didn’t rat him out, did you?

Not wittingly. 😉

@Sancho_P

Time again for some good advice: don’t drink and write.

And BTW, law enforcement needs to have physical possession of the phone.

@Dirk Praet


while technically it may be possible for you to shout out at Teheran central market square that Ayatollah Khomeini was obviously gay and knew nothing about Islam, it would probably not be a bright idea.

Fine. But I never claimed that everything technically possible would be a good idea. I only say it is technically possible. Good that you seem to agree eventually.


Who was wrong and who was right about Snowden has no bearing whatsoever on this topic.

Correct, but it should only show that you need better arguments than just referring to “experts” who already told us so many stupid things about Snowden and his “revelations”. It didn’t impress me that all these “experts” held opinions different from mine about the Snowden fairy tales, and it doesn’t impress me now that they hold different views about “backdoors”. Good technical arguments would impress me, but they are rare.

Wael March 25, 2016 4:53 PM

@Rolf Weber,

No, but so far in practice this was not even a big issue with unencrypted email

If you have a backdoor that extracts a private decryption key and a private (hopefully different) signing key, then nonrepudiation is gone. A backdoor is a backdoor: it allows low level access to everything, especially if it’s a backdoor designed to extract HW keys. Anyone who gains access to the back door is capable of impersonating the owner. They can look at decrypted emails, perform transactions on their behalf, and sign documents on their behalf. They can also perform actions that can get the owner in deep trouble …

Backdoors also enable overwriting of public keys, giving the ability to install arbitrary software and images.
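A toy illustration of that impersonation risk, under the assumption that the signing key has been extracted. A symmetric HMAC “signature” stands in for a real asymmetric scheme (such as Ed25519) purely for brevity; the point it shows is that anyone holding the signing key produces tags that verify identically to the owner’s, so non-repudiation is gone:

```python
import hmac
import hashlib

def sign(key: bytes, message: bytes) -> bytes:
    # Toy "signature": an HMAC tag. A real scheme would be asymmetric,
    # but the failure mode is the same: whoever holds the signing key
    # can produce valid signatures in the owner's name.
    return hmac.new(key, message, hashlib.sha256).digest()

def verify(key: bytes, message: bytes, tag: bytes) -> bool:
    # Constant-time comparison to avoid timing side channels.
    return hmac.compare_digest(sign(key, message), tag)

owner_key = b"signing-key-extracted-via-backdoor"  # hypothetical

# The owner signs a legitimate message...
legit = b"transfer $10 to Bob"
legit_tag = sign(owner_key, legit)

# ...but an attacker who obtained the key through the backdoor can
# sign anything on the owner's behalf, and it verifies identically.
forged = b"transfer $10000 to Mallory"
forged_tag = sign(owner_key, forged)
```

A verifier has no way to distinguish the forged message from the legitimate one, which is exactly the framing scenario described above.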

Sancho_P March 25, 2016 6:01 PM

@Rolf Weber

Wait, I only had tea until now, I guess this is why I didn’t get it:

First you wrote:

“NOBUS so far has only been discussed under the premise that the “us” is a third party, like the NSA or the FBI. But that’s not the case for “backdoors” like I propose, where the “us” is not a third party, but the manufacturer (like Apple) or the service provider (like WhatsApp).” [my emph]

then you wrote:

“For example smartphone manufacturers could implement it [the backdoor] in a way that only they themselve, only on their premises, and only if in physical possession of the phone, can unlock it.” [my emph]

and:

“- the manufacturer is in physical possession of the phone
– the unlocking is performed on the manufacturer’s premises”

and similar, many times more in this thread, so I guess it wasn’t a mistake.

Now you write:

”And BTW, law enforcement needs to have physical possession of the phone.”

I’m confused, should I go for some wine now to understand you?
Manufacturer or law enforcement have physical access?
Or is it the same, Apple being the LE?
Or both together, and where is the judge?
In US or Chinese premises?
Chinese LE acting in Cupertino?

But the question remains: How does the phone know that everything is “legal” and it is in the “right” possession, not in Putin’s lab (remember the fingerprint reader)?

Or would you propose to protect it by a padlock with three keys, one from the manufacturer, the other from the (appropriate) LE, one from the warranting judge?

Geez, a mechanically secured backdoor, that’s the solution!
– But how would the lock know who turns the keys?

Thanks, Rolf, you made my day, better than Mr. Bean could have done it 🙂

Rolf Weber March 25, 2016 6:23 PM

@Wael


A backdoor is a backdoor: it allows low level access to everything, especially if it’s a backdoor designed to extract HW keys.

But my proposed “backdoors” don’t need to extract keys, let alone HW keys.

Where do you see a problem regarding the proposed Apple and WhatsApp “backdoors”? Regarding Apple, I don’t see any at all. And regarding the WhatsApp MITM, messages could be spoofed, but not using the real user’s keys, so it would not stand up in court.

@Sancho_P


I’m confused, should I go for some wine now to understand you?
Manufacturer or law enforcement have physical access?
Or is it the same, Apple being the LE?
Or both together, and where is the judge?

Law enforcement seizes a phone (has physical access now), gets a warrant (from the judge, you know?), and brings the phone to the manufacturer (has physical access now) to have it unlocked. Just like the FBI would like to have it in the San Bernardino case. Cheers!

Sancho_P March 25, 2016 7:03 PM

@Rolf Weber

” … to have it unlocked. Just like the FBI would like to have it in the San Bernardino case.”

As I understood it, that’s not what the FBI wanted Apple to do?

The question remains why only the manufacturer should be able to access the phone. How does the phone know it’s the manufacturer attempting access, and not an adversary with stolen credentials or knowledge?

If there is access it will be abused.

(Let alone the burden on the manufacturer from any claim of theft of IP or other potential abuse or loss, regardless of how anyone got any information which might have been stored on any phone, worldwide.)

Niko March 25, 2016 7:04 PM

@dirk

There are at least 2 issues with that theory.
1) Only a small subset of software is approved for official government communications. It’s not clear the USG would worry about software that they don’t use.
2) If the backdoor is for example, limiting the key size to 56-bits, or using a weak algorithm, there’s no reason a company couldn’t sell a standard, secure version of their software in the US and a cryptographically weakened version of the software to China. Creating that type of “backdoor” for China wouldn’t weaken the security of US users at all. US export regulations seem designed to prevent software from being sold that is too cryptographically strong rather than too weak.
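The difference between a deliberately weakened 56-bit key and a 128-bit key is easy to put into numbers. The search rate below is an arbitrary assumption for illustration, not a benchmark:

```python
# Back-of-the-envelope brute-force times for an exhaustive key search.
# RATE is an assumed figure (keys tested per second), purely illustrative.
RATE = 10**12

seconds_56 = 2**56 / RATE    # roughly 7.2e4 seconds: under a day
seconds_128 = 2**128 / RATE  # roughly 3.4e26 seconds: on the order of 1e19 years

print(f"56-bit keyspace:  {seconds_56 / 3600:.1f} hours")
print(f"128-bit keyspace: {seconds_128 / (3600 * 24 * 365):.1e} years")
```

This is why a cryptographically weakened export version is a backdoor in practice even without any hidden access mechanism: anyone, not just the intended government, can brute-force it offline.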

Wael March 26, 2016 12:52 AM

@Rolf Weber,

But my proposed “backdoors” don’t need to extract keys, let alone HW keys.

Then let’s end the confusion and call a spade a spade. The “backdoor” you’re talking about is nothing more than post-manufacture forensic instrumentation, which is markedly different from a backdoor, without the quotation-mark delimiters.

Rolf Weber March 26, 2016 2:05 AM

@Sancho_P

It’s ok when you don’t like my proposed smartphone “backdoor”, but if you want to discuss it with me, you should at least read and understand it:
https://plus.google.com/+RolfWeber/posts/fPK3DyfYdNG

@Wael

I think most people I discuss with would call my proposals “backdoors” as well, so I think another term would only add to the confusion.
Look at Skype’s interception interface: most people call it a “backdoor”, and it is most likely implemented very similarly to my WhatsApp proposal.

Wael March 26, 2016 3:36 AM

@Rolf Weber,

I think most people I discuss with would call my proposals “backdoors” as well,

If the mechanism requires the manufacturer’s assistance and requires the device to be physically present, then it isn’t a backdoor. If the mechanism allows a non-owner to remotely gain access (real-time or delayed), then it qualifies as a backdoor. If the mechanism requires a deliberate weakening of a crypto algorithm such as Dual_EC_DRBG, then it also qualifies as a backdoor. If the mechanism uses an unadvertised separate channel for control, or to exfiltrate data after encrypting it with a specially provisioned key, then that’s a backdoor as well.

Wael March 26, 2016 4:36 AM

@Rolf Weber,

Read your Google+ proposal…

The goal is to store the PIN or passphrase of the user encrypted on the phone, so that only the manufacturer can decrypt it, and avoiding the usual attacks.

This is a manifest weakness. You almost never want to store a cleartext or an encrypted representation of a passphrase or a PIN, either on the device or on a server. Storing a hash may be OK. The preferred method is to use a key derivation function based on the passphrase (PBKDF2, for instance). Still, both are vulnerable to key loggers (if the device is jailbroken). The better method is to seal the key to a platform state (no comparison is required): the decryption key won’t be unsealed if the passphrase isn’t correct or the platform state has changed (see BitLocker, for example). Bypassing this mechanism will almost certainly fundamentally complicate the stack or require the introduction of a HW backdoor, which could be triggered by a magic packet or some other command-and-control event. Such mechanisms will expand the attack surface.
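As a concrete sketch of the KDF approach mentioned above, using PBKDF2-HMAC-SHA256 from Python’s standard library; the iteration count is an illustrative assumption and should be tuned upward for the target hardware:

```python
import hashlib
import os

def derive_key(passphrase: str, salt: bytes, iterations: int = 200_000) -> bytes:
    # PBKDF2-HMAC-SHA256: only the salt and iteration count need to be
    # stored. Neither the passphrase nor any encrypted copy of it is
    # ever written anywhere -- the key is recomputed on demand.
    return hashlib.pbkdf2_hmac("sha256", passphrase.encode("utf-8"),
                               salt, iterations)

salt = os.urandom(16)  # random per-device salt, stored alongside the data
key = derive_key("correct horse battery staple", salt)
```

Because derivation is deterministic for a given salt, nothing secret has to be escrowed, which is precisely why a vendor cannot later “decrypt on demand” without weakening this design.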

Some critics may say that bugs in algorithms or implementations may result in exploits.

Slight correction: they have resulted in exploits. Manufacturers that purposely introduced them were fined as well! I don’t want to give references, but I’m certain you either heard about a few of them, or you can find out after some research.

That’s correct, but we live with bugs anyway.

I see! What’s an extra line on a zebra? We have a few bugs, why not sprinkle some more! 😉

Every user who wants an “extra security” could additionally encrypt his data with other tools, he could use a “custom ROM” on his phone, or he could jailbreak or root it and then remove the “key escrow” (I’m pretty sure that instruction HOWTOs would emerge very quickly). Nobody would do something illegally then.

You don’t think the bad guys will do that, or are you assuming they don’t read English on this blog and other “how to” sites? However, if the backdoor is sitting really deep in the stack, even a custom ROM won’t bypass it. So your argument fails from two and a half perspectives: either the bad guys have the ability to remove the backdoor (which defeats the proposed backdoor), or the bad guys (and every user) can’t remove it (which negates the first sentence: “every user who wants an ‘extra security’…”). So what’s the purpose of your proposed backdoor, to catch idiots who can’t follow a “how-to” list of instructions? Additionally, a properly implemented backdoor will be very difficult to detect and bypass and will also defeat: “could additionally encrypt his data with other tools”.

Clive Robinson March 26, 2016 5:16 AM

@ Wael,

Additionally, a properly implemented backdoor will be very difficult to detect and bypass and will also defeat: “could additionally encrypt his data with other tools”.

A minor correction “difficult to detect” should be “impossible to detect” in human terms and likewise impossible to bypass within the device.

It’s already known how to do this with all mathematical –as opposed to logical– crypto algorithms due to “redundancy”. It’s been known to be the case since the 1980s. I’m away from my “dead tree cave” at the moment so don’t have the reference to the paper given at a Crypto conference.

As I’ve repeatedly said for around a decade and a half before the Ed Snowden revelations, you need to move the end points not just beyond the devices you don’t have full control on, but also put the human in the chain.

Anyone not doing this will get “owned” at some point as a matter of course, the only unknown is if anyone will make them aware of it by other actions (blackmail / public humiliation etc).

And as I’ve also repeatedly said, even moving the end points is not of itself sufficient. If you are of sufficient interest then “end run” attacks will happen. With IoT etc. these will also become a matter of course for the incautious or unknowing.

The solution as history shows is good old “tried and tested” OpSec to keep your “public self” on the radar, whilst also keeping your “private self” off the radar, not even “in the grass” of the noise floor. Such behaviour was once coined “Spy Craft”.

Dirk Praet March 26, 2016 10:16 AM

@ Rolf Weber

I only say it is technically possible. Good that you seem to agree eventually.

Please be so kind as to not twist my words. No one here has ever said that it isn’t technically possible. The debate is over whether or not it is possible to do it securely. Capisce?

Correct, but it should only show that you need better arguments than just referring to “experts”

Yes, Rolf, we know what you mean. And there’s something deeply troubling about demanding strong technical arguments from other people while at the same time defending your own position with pretty much every known fallacy in the book.

@ Clive, @ Wael

A minor correction “difficult to detect” should be “impossible to detect” in human terms and likewise impossible to bypass within the device.

Let’s assume for argument’s sake that vendor “Alpha” on behalf of government “X” of the country it has its corporate headquarters in succeeds in implementing a bullet proof, undetectable backdoor in a popular secure IM app. Backdoor access is protected by a clever, cryptographically sound key authentication mechanism and granted to LEA “LX1” of country “X”.

That’s all good and dandy, but how does that scale when LEAs “LX2”, “LX3” and “LX4” start demanding keys too, on top of similar requests by the governments of countries “Y” and “Z” for their LEAs “LY1”, “LY2”, “LY3”, “LZ1”, “LZ2” and “LZ3”? It’s reasonable to assume that the intelligence communities of countries A to Z will not only demand keys for themselves too but will also dedicate significant resources to uncovering the known backdoor subsystem itself in an effort to exploit potential flaws or vulnerabilities possibly not known to the vendor. In no time, organised crime syndicates will try to get their hands on access keys too.

It’s pretty obvious that something like this is going to put an enormous burden on any company and expose it to gigantic liabilities the moment access keys or the backdoor subsystem itself one way or another get compromised. On top of that, the moment the presence of a backdoor is publicly known, the product it was built into will be ditched by any private or public entity even remotely concerned with the confidentiality of its data/communications, which in most cases will also mean the very people targeted by the backdoor in the first place.

So until such a time that someone comes up not just with a technically bulletproof solution, but also one that scales and does not impose unreasonable security and financial risks on both vendor and users of the product, there is no way anyone can convince me it can be done in even a “reasonably” secure way.

Figureitout March 26, 2016 11:20 AM

Wael RE: your eval of rolf’s “proposal” (not implementation people!! he’s saying it’s possible to implement securely w/o providing an implementation)
–Nice, yeah, what you describe sounds a lot like what Apple’s done w/ the KDF, and Rolf’s is a crude version not using best practices. It’s unclear technically how the 3 keypairs would work cleanly and securely (just more keys to secure).

And the nightmares set in w/ the unknown effects the backdoor may have, opening trivial holes elsewhere.

And it’s funny he’s targeting the lazy criminals that use mainstream apps willy nilly and expect them to be hidden from authorities. Telling of his background (perhaps it’s his laziness) that he goes after the bottom-of-the-barrel opportunistic criminals that cycle in and out of jail, not the sophisticated ones running rings around these people (and those are the ones who need to be touched, like the guys hacking federal reserve banks!! lol).

Clive Robinson March 26, 2016 11:28 AM

@ Dirk Praet,

That’s all good and dandy, but how does that scale

We know it does not scale but… That’s the aim of NOBUS (nobody but us).

Only two things can happen. The first is that LX1 never tells anybody about the backdoor or uses information gained from it. The second is that every LEA gets to know about it, and eventually LXn will leak information, such as the key, to the public in some way; then it’s EBIUS (everybody including us).

The thing about NOBUS is that it’s a “busted flush” these days. It only works when people don’t think it exists and the LEA takes extraordinary measures to protect the secret. This was reputedly the case with Ultra during WWII, but Malcolm Muggeridge and others let things slip, which led in 1974 to Winterbotham’s book and finally to Gordon Welchman’s “The Hut Six Story”.

But unlike the Germans, who believed Enigma to be secure (which it was, though it was used insecurely) and thus never actually tested that belief, sufficient people these days know that all mathematical encryption algorithms can be “backdoored”, and will, if sensible, mitigate or test.

There is only so far an attacker can go with hiding NOBUS before the secret leaks. That is, those who suspect NOBUS can lay down a false trail that the attacker has no choice but to respond to (think of a message about a drone-delivered dirty bomb at a US Presidential inauguration, given just a short time beforehand, etc.). If the attackers swallow the bait and take any kind of action, then a NOBUS backdoor is confirmed.

For obvious reasons testing for NOBUS is dangerous –but manageable– thus it’s better to assume it exists and mitigate, i.e. assume the channel is open and move the end points with a One Time code or cipher.

But as you note not only do NOBUS backdoors not scale, public knowledge will cause people to migrate to other solutions or mitigation if other solutions are equally as suspect.

Which brings us to your final comment,

So until such a time that someone comes up not just with a technically bulletproof solution, but also one that scales and does not impose unreasonable security and financial risks on both vendor and users of the product, there is no way anyone can convince me it can be done in even a “reasonably” secure way.

Without going into why not, I’ll say that scalability is impractical beyond a very very small number of distinct keys. In part because it’s like the physical lock “master key” issue, each additional key or lock weakens the backdoor security in an exponential way. Thus a NOBUS backdoor is never going to be secure.
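The compounding can be put in rough numbers with a toy model (my own illustration, not part of any proposal in the thread). Under the simplifying assumption that each keyholder independently leaks or loses their key with some small probability p over a given period, the chance that at least one of n escrowed keys escapes is 1 − (1 − p)^n:

```python
def compromise_probability(p: float, n: int) -> float:
    """Chance that at least one of n independently held keys leaks,
    given a per-keyholder leak probability p."""
    return 1 - (1 - p) ** n

# Even an assumed, modest 2% per-keyholder risk compounds quickly as
# LEAs, agencies and foreign governments all demand their own keys:
for n in (1, 5, 20, 100):
    print(f"{n:3d} keys -> {compromise_probability(0.02, n):.1%}")
```

The model is deliberately crude (real leaks are rarely independent, and a single shared master key is worse still), but it shows why each additional key or lock weakens the scheme so dramatically.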

Likewise with Rolf Weber’s “shifting sands” backdoor that is not a backdoor but a hardware bypass on the physical device: as Wael has pointed out, knowledge of its existence will without any doubt get out, irrespective of what hoops the LEA and manufacturer go through. We have seen this with the FBI / Harris Corp and stingrays.

Now “the cat is out of the bag” with NIST publicly ditching the NSA’s Dual EC DRBG due to a suspected backdoor, people with sense will migrate or mitigate. The cost of this is that the “going dark” problem LEAs bleat about will now happen, and badly, NIBOMS (no if’s, but’s or maybe’s).

The only question now is how badly will it happen and at what pace?

Dirk Praet March 26, 2016 11:47 AM

@ Niko

Only a small subset of software is approved for official government communications. It’s not clear the USG would worry about software that they don’t use.

Even if not used in the public sector, the private sector would still be affected. As in economic espionage and stuff. That should have them equally worried.

US export regulations seem designed to prevent software from being sold that is too cryptographically strong rather than too weak.

Why would anyone ever buy a cryptographically weakened and insecure version of a product, and why would a company sell one if there is no explicit statute or export prohibition against selling the full version in the destination country? That’s a scenario that not only brings back the Clipper Chip, but also pre-2000 encryption export policies. I strongly believe this really is the best way to ruin the US tech industry.

Wael March 26, 2016 12:16 PM

@Clive Robinson,

The solution as history shows is good old “tried and tested” OpSec to keep your “public self” on the radar, whilst also keeping your “private self” off the radar, not even “in the grass” of the noise floor. Such behaviour was once coined “Spy Craft”.

Yes! Avoid being a target!

Wael March 26, 2016 12:22 PM

@Dirk Praet, @Clive Robinson,

That’s all good and dandy, but how does that scale when LEAs “LX2”, “LX3” and “LX4” start demanding keys too, on top of similar requests by the governments of countries “Y” and “Z” for their LEAs “LY1”, “LY2”, “LY3”, “LZ1”, “LZ2” and “LZ3”

One could use something like OAuth 2.0. The problem is who the real owner is that grants access to the mobile-resident spyware. Still doesn’t solve many of the issues.

Wael March 26, 2016 12:33 PM

@Figureitout,

And it’s funny he’s targeting the lazy criminals that use mainstream apps willy nilly and expect them to be hidden from authorities.

The thing I can’t wrap my head around in the proposal is this: install a backdoor mechanism, whatever it may look like. Users who don’t trust the device can then bypass the backdoor through various mechanisms:

  • Using Custom ROMs
  • Using third party encryption tools

Without going into the technical details, even at the Conceptual Architecture level: how is this a valid solution?

Sancho_P March 26, 2016 4:44 PM

Before discussing unsound technical “solutions” we should see that there is no technical problem to solve.

Some people want to have access, some people may grant access.
Others don’t.
Liberty is to have the choice.
Those who want to grant access may engrave their pwd onto the back of the device.
No technology, no bugs.

  • Be aware that access wouldn’t mean read only.
    Undetectable access to one’s home / safe / memory / brain means any “evidence” found there would be rendered null and void.
  • Be aware that “read” (or listen) is different from “understand”.
    What counts as encrypted: is it when you can’t read, or when you can’t understand?
    Even for a bright “reader”, an OTP could transform the “plaintext” into the opposite plaintext. Content without context is often worthless.
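That OTP property can be shown in a few lines (a toy sketch; the messages and keys are made up for illustration): for any candidate plaintext of the same length, there exists a pad that “decrypts” the intercepted ciphertext to it, so the ciphertext alone proves nothing about content.

```python
import os

def xor(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length byte strings (OTP encrypt/decrypt)."""
    return bytes(x ^ y for x, y in zip(a, b))

real_msg   = b"ATTACK AT DAWN"
decoy_msg  = b"RETREAT AT TEN"          # same length, opposite meaning
key        = os.urandom(len(real_msg))  # the genuine one-time pad
ciphertext = xor(real_msg, key)

# Anyone holding only the ciphertext cannot rule out the decoy,
# because this equally valid "pad" decrypts the ciphertext to it:
fake_key = xor(ciphertext, decoy_msg)
assert xor(ciphertext, fake_key) == decoy_msg
```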

The ethical question is first:
Would we demand to access + examine a suspect’s brain?

Dirk Praet March 26, 2016 5:10 PM

@ Clive

For obvious reasons testing for NOBUS is dangerous –but manageable– thus it’s better to assume it exists and mitigate, i.e. assume the channel is open and move the end points with a One Time code or cipher.

If the new mandated decryption bill of senators Burr and Feinstein passes, then all plausible deniability is also out of the window.

@ Wael, @ Figureitout, @ Clive

Without going into the technical details, even at the Conceptual Architecture level: how is this a valid solution?

It isn’t.

One could use something like OAuth 2.0.

Preferably not. It has already had numerous security flaws exposed in implementations and has been described as inherently insecure by quite a few people, including a primary contributor to the specification, who stated that implementation mistakes are almost inevitable.

The problem is who the real owner is that grants access to the mobile-resident spyware. Still doesn’t solve many of the issues.

I suppose it would be the vendor, granting access to LEA “LXn” either on a permanent or case-by-case basis upon presentation of a valid warrant. It’s reasonable to assume that LE would push for the former so as to avoid unpleasant waiting times while vendor Alpha’s legal department vets the (many) requests. The next question is where interception and decryption take place. At the vendor’s premises? Or would every LEA have its own private backdoor interface? There are a couple of very interesting scenarios here which I leave to @Rolf Weber to think through.

Niko March 26, 2016 5:25 PM

@dirk

I think you forgot what started this line of comments. The question was what if some authoritarian regime, say China, asked a US company for cryptographically weak software so that they could spy on dissidents or their own citizens. The US company would sell the software to comply with the destination country’s laws and not to be excluded from that market.

Clive Robinson March 26, 2016 8:31 PM

@ Dirk,

If the new mandated decryption bill of senators Burr and Feinstein passes, then all plausible deniability is also out of the window.

I’ve not seen anything concrete about Burr & Feinstein’s musings about the way they think the world should be, so rational debate is in effect currently not possible.

However if past performance is anything to go by, anything with Feinstein’s ink on it is not likely to be altogether rational 🙁

In British history there is a story about King Canute (various spellings alert), who got tired of various sycophants in his court going on about his supposed “absolute power”. He had them carry him on his throne down to the water line at low tide and repeatedly told the tide to turn. The tide came in unabated, and the sycophants got their expensive shoes and robes soaked and ruined by the sea water, an object lesson in the limitations of the “absolute power” of man, irrespective of rank or privilege.

As I’ve recently commented about Obama’s recent stupidity in suggesting Silicon Valley should accommodate the whims of James Comey, “the laws of man” do not counter either “the laws of nature” or “the laws of mathematics”, no matter how much he or those others in power would wish otherwise.

As was once observed, as far as the Earth is concerned mankind is nothing but a mildly annoying skin complaint that could easily be wiped away. Likewise we know that, as far as the solar system is concerned, the Earth is just minutes away from a solar ejection that would sterilize the Earth’s surface should one head directly towards us. And at some point the Earth will be first broiled then consumed as the Sun runs down, unless one of a myriad of catastrophes happens first… The politicos still don’t get that, powerless as they are to stop any of that, as far as we currently know good crypto will outlive all of it, keeping our secrets long past the point where Earth is once again stardust. And importantly, no amount of coercion on Silicon Valley or service providers will stop those with the brains from using it…

I can easily see some “tech whizz kid” coming up with a little “overlay” device that you put on the front of your smart phone, that converts your simple plaintext tapping on its surface into OTP-encrypted taps on the smart phone screen, and likewise converts OTP-encrypted text on the smartphone screen back to plaintext on its display… We know how it could be done; somebody just has to design and manufacture the parts and the final product in a sufficiently low cost way to make it viable.

The reality is that the first little encryptor device is more likely to be a contactless Bluetooth / NFC device using the “external keyboard” etc drivers already built into Smart Phones. And that it could be built to a retail price point of around 50USD.

Figureitout March 27, 2016 12:23 AM

Wael
how is this a valid solution?
–I don’t know, and I won’t expend a lot of energy trying to find out. You don’t either eh? It’s not worth it. Just got the hardest project ever dropped on me lol, f*ck. Don’t have time for bs like incomplete “secure” backdoor proposals. We need to make sure it’s easy to migrate away from backdoored systems if companies decide to do it (or get enslaved to it); the ball’s in the court of the SoC engineers and vendors now, and as cool and useful as SoCs are, they’re a huge risk since there’s too much packed into the chips. We have a responsibility to let the public know our hands may be tied, and ideally to build secure/backdoor-free systems so far as we can tell. Brings shame to our profession.

Dirk Praet
–Broken link mate: http://homakov.blogspot.co.uk/2013/02/hacking-facebook-with-oauth2-and-chrome.html

Clive Robinson
little “overlay” device that you put on the front of your smart phone, that converts your simple plain text taping on it’s surface to OTP
–Depending on the chip you use, this would probably not be that bad (a lot of the work’s done for you). Can tweak the gain setting to get clearance well off the screen (no fingerprints on the phone screen, and encryption happens off-device then gets passed down); probably most challenging is the interface on the untrustworthy side (how decryption is done on the smartphone, how to get data to and from the CPU securely, etc.).

Neat product idea. If someone bites, I wish the implementation would be approachable (super clean code at least) and not get all, you know, “fashionista” w/ it like so many BT/NFC and smartphone products are. Just want it to work fast and discreet.

Dirk Praet March 27, 2016 11:12 AM

@ Niko

The question was what if some authoritarian regime, say China, asked a US company for cryptographically weak software so that they could spy on dissidents or their own citizens.

Which I argued would be harder to fight if the US itself already had mandatory backdoor legislation in place.

PT Derringer March 27, 2016 1:38 PM

@Clive Robinson
“Public self” and “private self” smell of activism, dissidents and all things related. Is there anything to do for people who aren’t necessarily dissidents and haven’t done anything outright criminal but may be unsatisfied with being tracked and listened to?

Should we start adopting criminal and spook tactics?

Niko March 27, 2016 3:18 PM

@dirk

To recap, you said it would make it harder for US companies to deny other countries backdoors and to challenge those requests in foreign courts, based on an unreasonable burden argument. I raised the point that you don’t even know if there is a “reasonable burden” standard in foreign court cases, and in any case it’s sheer nonsense to suggest that Chinese judges are going to reach any ruling contrary to what the Politburo decides in a case of that magnitude, especially on issues of Chinese national security and internal stability. You raised the point that the USG could diplomatically lobby foreign governments to back down, which is true, but I questioned whether they would actually do that. You claimed they would lobby against backdoors which could enable economic espionage against the US. However, since Microsoft, for instance, could produce a Chinese Windows, a Russian Windows and a US Windows, a vulnerability in one wouldn’t be a vulnerability in the other, and the economic espionage argument falls away. That means it is a big assumption on your part that the USG is engaging in diplomacy to get countries not to ask US companies for backdoors. There’s not anything else I can add.

Clive Robinson March 27, 2016 6:09 PM

@ PT Derringer,

Should we start adopting criminal and spook tactics?

It is a mistake to set such behaviour in negative terms of “criminal and spook”. Whilst the behaviours such words invoke are considered some of the oldest professions, they stole the behaviour from a far older profession, one without the negative connotations.

Think instead of the ancient arts of storytelling, acting and singing, used to pass wisdom and cultural knowledge down the generations long before other methods such as inscribing first pictograms and then words in stone and clay or on papyrus.

Such public and private presentations have been practiced by artists down the eons of mankind’s existence. And that is how you should view such behaviours: with the positivity of cultural inheritance, set against the temporary aberrant behaviour of authoritarians.

And as such yes, we should practice and celebrate these behaviours as part of our history and culture, not something to be cast aside on the demands of self-appointed authoritarians who have in no way earned the respect or trust of those they seek to manipulate and subjugate.

All western cultures have cultural memories of those who stood up to authoritarian behaviours, and changed society for the better, in part it was the arts and their associated behaviours that enabled their successes and we should not forget that.

For instance, it was a hundred years ago to the day that an uprising took place in Ireland; it was quite brutally repressed. But the events leading up to it and that followed gave rise to the Irish Republic of today. Irish history from well before Oliver Cromwell can be seen in the songs and poetry passed down the generations, at times in public but more often in private.

A lesson from history we would do well to not just remember, but seek to emulate now and in the future, so that we still have a future for our children and grandchildren.

Dirk Praet March 27, 2016 6:21 PM

@ Niko

it would make it harder for US companies to deny other countries back doors and challenge those requests in foreign courts, based on an unreasonable burden argument.

Not just on unreasonable burden, but on domestic statutes that could be argued to conflict with such requests and need to be resolved too. I refer to the ongoing Safe Harbour/Privacy Shield limbo between the US and the EU.

However, since Microsoft, for instance, could produce a Chinese Windows, a Russian Windows, and a US windows, a vulnerability in one wouldn’t be a vulnerability in the other and the economic espionage argument falls away.

Yes, technically they could do that, which lands us right back in the backdoor scaling and management discussion.

PT Derringer March 28, 2016 10:12 AM

@Clive Robinson
I meant no offense. Just that the common guy/gal is unlikely to be proficient in the discipline discussed, unless his/her cover is that good.

Your perspective is interesting. I guess I never thought about it that way.

Rolf Weber March 29, 2016 2:58 AM

[Sorry for the reply delay, I was mostly offline over Easter]

@Wael


If the mechanism requires the manufacturer’s assistance and requires the device to be physically present, then it isn’t a backdoor. If the mechanism allows a non-owner to remotely gain access (real-time or delayed), then it qualifies as a backdoor.

So my Apple proposal isn’t a backdoor while the WhatsApp is? 🙂


This is a manifest weakness. You almost never want to save a clear text or an encrypted representation of a passphrase or a PIN, either on the device or on a server. Storing a hash may be OK. The preferred method is to use a KDF based on a passphrase (PBKDF2, for instance).

Storing a hash will not help; with a hash you cannot unlock the phone. Instead of storing the encrypted PIN, it would also be possible to store the filesystem key encrypted, but that wouldn’t change much.

And what attacks do you see against my approach? To decrypt the PIN, there are 3 different keys needed: The HW “master key” and 2 per-device keys (one only available on the phone itself).
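To make the two storage strategies concrete, here is a toy sketch (key names are illustrative, and XOR with a hash-derived stream stands in for a real cipher): the escrowed PIN is recoverable by whoever assembles all three keys, whereas a PBKDF2 verifier, as suggested above, can only confirm a guessed PIN, never reveal it.

```python
import hashlib
import os

def toy_cipher(key: bytes, data: bytes) -> bytes:
    """Toy stand-in for a real cipher: XOR with a key-derived stream.
    XOR is its own inverse, so this both encrypts and decrypts."""
    stream = hashlib.sha256(key).digest()
    while len(stream) < len(data):
        stream += hashlib.sha256(stream).digest()
    return bytes(a ^ b for a, b in zip(data, stream))

pin = b"1234"

# The proposal's three keys (names are my own, for illustration):
hw_master_key  = os.urandom(32)  # hardware "master key", shared at the vendor
vendor_dev_key = os.urandom(32)  # per-device key held by the vendor
device_key     = os.urandom(32)  # per-device key that never leaves the phone

# Escrow: the PIN wrapped under all three keys -- reversible by design.
escrow = toy_cipher(hw_master_key,
         toy_cipher(vendor_dev_key,
         toy_cipher(device_key, pin)))
recovered = toy_cipher(device_key,
            toy_cipher(vendor_dev_key,
            toy_cipher(hw_master_key, escrow)))
assert recovered == pin  # whoever gathers the keys gets the PIN back

# Alternative: a PBKDF2 verifier can only check a PIN, not recover it.
salt = os.urandom(16)
verifier = hashlib.pbkdf2_hmac("sha256", pin, salt, 200_000)
assert verifier == hashlib.pbkdf2_hmac("sha256", pin, salt, 200_000)
```

The attack surface follows directly: every party holding the master or vendor key becomes a path back to the PIN, which is exactly the scaling objection raised elsewhere in the thread.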


Slight correction: they have resulted in exploits.

Everything we do may result in exploits. The point is that “backdoors” do not necessarily result in exploits.


You don’t think the bad guys will do that, or are you assuming they don’t read English on this blog and other “how to” sites?

Some may. But you cannot generalize from some to all. It seems that currently a lot of criminals use phones that are unlockable. Or why do you think the FBI seizes and unlocks phones? Because it’s a fun pastime?


However, if the backdoor is sitting really deep in the stack, even a custom ROM won’t bypass it.

My proposed “backdoors” would be well known. Everybody would know that phones with default software are unlockable by the manufacturers, and everybody would know that WhatsApp is capable of intercepting messages. Just like it is already the case for e.g. Skype.

Rolf Weber March 29, 2016 3:07 AM

@Dirk Praet


That’s all good and dandy, but how does that scale when LEAs “LX2”, “LX3” and “LX4” start demanding keys too, on top of similar requests by the governments of countries “Y” and “Z” for their LEAs “LY1”, “LY2”, “LY3”, “LZ1”, “LZ2” and “LZ3”?

Again: my proposal isn’t designed to give law enforcement keys or any kind of “direct access”. My proposal is that the service provider must be able to circumvent its own encryption.

Dirk Praet March 29, 2016 4:59 AM

@ Rolf Weber

Again: my proposal isn’t designed to give law enforcement keys or any kind of “direct access”. My proposal is that the service provider must be able to circumvent its own encryption.

Please explain how this works in practice for say, an IM conversation agency LXn wants to track in real time. Are you suggesting that the vendor at their facilities will record and decrypt all communications of persons under an LE warrant, store the logs and then send them to LXn? How is that a practical approach to prevent or stop a crime in progress?

And just how many resources and how much staff do you think a company would require to accommodate a constant stream of valid LE requests in a timely manner? How does this solution scale?

It seems that currently a lot of criminals use phones that are unlockable.

Because they don’t know they are or just don’t care. Watch this change after the recent Apple v. FBI sh*t storm and outcome.

The point is that “backdoors” do not necessarily result in exploits.

Wishful thinking. Once there is a known backdoor, people will eventually research, break and exploit it.

My proposed “backdoors” would be well known. Everybody would know that phones with default software are unlockable by the manufacturers, and everybody would know that WhatsApp is capable of intercepting messages.

As @Wael already asked: even from a conceptual angle, how is this in any way a valid solution for LE if it works only for the dumbest of criminals, and, in the case of an IM, without real-time interception?

Rolf Weber March 29, 2016 6:52 AM

@Dirk Praet


Please explain how this works in practice for say, an IM conversation agency LXn wants to track in real time. Are you suggesting that the vendor at their facilities will record and decrypt all communications of persons under an LE warrant, store the logs and then send them to LXn? How is that a practical approach to prevent or stop a crime in progress?

Like it is practiced already today with “traditional” monitoring: agency LXn serves the company a warrant, and if it is lawful the company has to comply.
And what should be the difficulty with real time access? The messages or the streams are copied, and the agency is granted access to these copies.


And just how many resources and staff do you think a company would require to accomodate in a timely manner a constant stream of valid LE requests? How does this solution scale?

I don’t know, but it seems feasible, at least Skype seems to be able to handle it.


Because they don’t know they are or just don’t care. Watch this change after the recent Apple v. FBI sh*t storm and outcome.

Criminals will always make mistakes. Criminals who make no mistakes don’t get caught, but most of them do.


Wishful thinking. Once there is a known backdoor, people will eventually research, break and exploit it.

You simply cannot prove this claim. You only have some exceptional examples where this happened (like Vodafone Greece), but even in these examples it is not clear whether the backdoor was really necessary for the break-in.


As @Wael already asked: even from a conceptual angle, how is this in any way a valid solution for LE if it works only for the dumbest of criminals, and, in the case of an IM, without real-time interception?

I think I answered these questions above in this post.

Dirk Praet March 29, 2016 11:10 AM

@ Rolf Weber

Like it is practiced already today with “traditional” monitoring. … The messages or the streams are copied and the agancy is granted access to these copies.

That’s not how it happens in “traditional” monitoring. When LE wants to monitor someone’s phone conversations, they get a warrant to tap into the central switching networks in the phone system which allows them to easily listen in without being detected. They also have access to the stations that relay mobile phone calls, which lets them eavesdrop on wireless communications. That’s a direct access system, not one in which the phone company somehow copies, logs or relays conversations.

I don’t know, but it seems feasible, at least Skype seems to be able to handle it.

“I don’t know” is not a valid answer when presenting any type of solution. No one except MS and US IC/LE knows exactly how Skype monitoring is being done today. If past methods are to provide any clue, the logical approach would be similar to a phone tap. The method you are describing sounds counter-intuitive, tedious, inefficient and resource intensive.

As an engineer, surely you must see that too. I think you just don’t want to go there because in doing so you would be casting serious doubts on your own “no direct access” mantra.

You simply cannot prove this claim.

The NSA’s ANT catalog is about 50 pages of previously unknown backdoors that were eventually found out about. And there’s not only Vodafone Greece: Juniper, Dual EC DRBG, Sercomm DSL routers, the PGP full-disk encryption backdoor, backdoors in pirated copies of commercial WordPress plug-ins, the Joomla plug-in backdoor, ProFTPD, the Borland Interbase backdoor, the 2003 Linux backdoor, Back Orifice, the tcpdump backdoor, the suspicious _NSAKEY in NT4 SP5, etc.

Whether or not these backdoors were introduced by product vendors, state actors or other attackers makes no difference from a security or risk management perspective. We may not be able to empirically prove that every backdoor out there has been or ever will be found out about and exploited, but from what we know there is, statistically, a more than reasonable chance that eventually many, if not most, will be. Conversely, you cannot prove that there is even one out there that hasn’t been compromised, which, funnily enough, would actually disprove your own claim.

The simple fact of the matter is that introducing backdoors is like dealing drugs: the question is not if you will be caught but when you will be caught. Especially when you go tell the world you are a dealer.

Wael March 30, 2016 1:04 AM

@Rolf Weber,

Rolf Weber on Possible Government Demand for WhatsApp Backdoor:

[Sorry for the reply delay, I was mostly offline over Easter]

Sorry for the belated “Frohe Ostern” (“Happy Easter”; I cheated)

So my Apple proposal isn’t a backdoor while the WhatsApp is? 🙂

It would so seem!

To store a hash will not help, with a hash you cannot unlock the phone.

Depends on the implementation. I wasn’t proposing using the hash to unlock a phone, though.

Instead of storing the encrypted PIN it would also be a possibility to store the filesystem key encrypted, but that wouldn’t change much.

Ok.

And what attacks do you see against my approach? To decrypt the PIN, there are 3 different keys needed: The HW “master key” and 2 per-device keys (one only available on the phone itself).

Define the problem statement and the goal of the proposal (separate the two proposals), and show that your suggestion accomplishes that goal. From what we’ve seen so far, you’ll catch petty criminals. The hard core ones will not be caught using this mechanism.

Everything we do may result in exploits. The point is that “backdoors” do *not* *necessarily* result in exploits.

Not all exploits are created equal or are in the same class. This, again, is the point about other dimensions you are downplaying. Exploits could be technical, political, or ethical.

Some may. But you cannot generalize from some to all. It seems that currently a lot of criminals use phones that are unlockable.

This is a subjective statement.

Or why do you think the FBI seizes and unlocks phones? Because it’s a fun pastime?

Tickle me, I’m trying to laugh 😉

My proposed “backdoors” would be well known. Everybody would know that phones with default software are unlockable by the manufacturers, and everybody would know that WhatsApp is capable of intercepting messages. Just like it is already the case for e.g. Skype.

What’s the effect of this knowledge on behavior? Any security savvy person will operate under this assumption!


Rolf Weber March 30, 2016 3:01 AM

@Dirk Praet


That’s a direct access system, not one in which the phone company somehow copies, logs or relays conversations.

Yes, but this doesn’t work that way in packet-switched networks like the internet, at least not if the goal is to prevent the government from receiving data about uninvolved people. So either it is implemented as a “direct access”, but with a box in between that filters out everything but the target, or you copy the selected data and provide it on another system. Because in cases like WhatsApp or Skype a separate decrypting process is needed, I’d clearly prefer the copying.
But these are only necessary, additional technical steps that don’t change much about the nature of the monitoring; it is still comparable to “traditional” surveillance.
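The “copy the selected data” idea can be sketched in a few lines of Python. The packet fields and the target identifier are hypothetical; the sketch only shows that copying can be limited to one target while everything else passes through untouched.

```python
def intercept_copy(packets, target):
    """Copy only the target's packets to a separate intercept system;
    all traffic is forwarded unchanged and uninvolved users are never copied."""
    copies = []
    for pkt in packets:
        if target in (pkt["src"], pkt["dst"]):
            copies.append(dict(pkt))   # handed off for separate decryption
        # every packet, copied or not, is forwarded to its destination as normal
    return copies

traffic = [
    {"src": "alice", "dst": "bob", "payload": b"..."},
    {"src": "carol", "dst": "dave", "payload": b"..."},
    {"src": "bob", "dst": "alice", "payload": b"..."},
]
assert [p["src"] for p in intercept_copy(traffic, "alice")] == ["alice", "bob"]
```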


The method you are describing [regarding Skype’s “backdoor”] sounds counter-intuitive, tedious, inefficient and resource intensive.

According to Wikipedia, Skype’s “backdoor” has pretty much the same design as my proposal:

“This is implemented through switching the Skype client for a particular user account from the client side encryption to the server side encryption, allowing dissemination of an unencrypted data stream.”
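For what it’s worth, the quoted design can be modelled in a few lines of Python. All names here (the class, methods, and flag) are purely illustrative and not Skype’s actual code; the toy only shows that the switch is a server-side decision the user never sees.

```python
class ChatServer:
    """Toy model: a per-account flag switches that account from end-to-end
    to server-side encryption, making its traffic readable by the server."""

    def __init__(self):
        self.intercepted = set()   # accounts switched to server-side crypto
        self.captured = []         # plaintext available for disclosure

    def mode_for(self, account):
        # At session setup, the client asks the server which mode to use.
        return "server-side" if account in self.intercepted else "end-to-end"

    def relay(self, sender, message):
        if self.mode_for(sender) == "server-side":
            # Server-side mode: the server holds the session key, so the
            # relayed stream is readable here.
            self.captured.append((sender, message))
        # In end-to-end mode the server only ever sees ciphertext.
        # ...in either mode, forward the message to the recipient...

srv = ChatServer()
srv.relay("alice", "hello")           # end-to-end: nothing captured
srv.intercepted.add("alice")          # interception order arrives
srv.relay("alice", "meet at noon")    # now server-side: captured in the clear
assert srv.captured == [("alice", "meet at noon")]
```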


As an engineer, surely you must see that too. I think you just don’t want to go there because in doing so you would be casting serious doubts on your own “no direct access” mantra.

As an engineer, I simply know that it is implementable without providing any kind of what I would call a “direct access”.

@Wael


Sorry for the belated “Frohe Ostern” (I cheated)

Trotzdem danke. 🙂


Define the problem statement, your goal of the proposal (separate the two proposals) and show that your suggestion accomplishes your goal. From what we’ve seen so far, you’ll catch petty criminals. The hard core ones will not be caught using this mechanism.

Yes, of course. You should understand that I agree with you guys that there is no reasonable way to make all encryption available to law enforcement. Strong crypto is out of the tube, that’s a matter of fact. So yes, my suggestion will only apply to criminals who use mass products with default settings, and who are too stupid, lazy or careless to use other means of encryption.


This, again, is the point about other dimensions you are downplaying. Exploits could be technical, political, or ethical.

It is not the job of us techies to solve political or ethical problems (whether they exist at all or not). Our job is to give factually correct answers to technical questions. And “there is no secure backdoor” is simply not the correct answer. The correct answer would be “we can implement reasonably secure backdoors, but consider this and this and this …”. And then the majority could decide, as is usual in democratic countries.

What bothers me most in this kind of discussion is that a small technical elite lies about technical facts in order to push their own political agenda. They take advantage of the fact that very few people (including politicians and judges) understand the technology.

(But luckily this will change with the next generations)


What’s the effect of this knowledge on behavior? Any security-savvy person will operate under this assumption!

Maybe. But OPSEC is the hardest part …

Dirk Praet March 30, 2016 7:24 AM

@ Rolf Weber

According to Wikipedia, Skype’s “backdoor” has pretty much the same design as my proposal

The unsubstantiated, one-line Wikipedia description of the Skype interception method can hardly be called conclusive evidence of what this “dissemination of an unencrypted data stream” would look like in practice. There’s no denying that, technically and from an engineering vantage point, it could be done in the way you’re proposing, but for all practical purposes that would not be the most efficient solution for either vendor or LE, especially because more practical methods can be implemented, and, in my opinion, have been.

Our job is to give factually correct answers to technical questions.

Probably the first time we ever agree on something. And the fact of the matter is that so far you have failed to come up with a solution that is conceptually and technically sound, secure, scalable and passes scrutiny by your peers. But don’t feel too bad about it. Neither has anyone else.

It is not the job of us techies to solve political or ethical problems

Right again. That’s everyone’s job.

Sancho_P March 30, 2016 5:01 PM

@Rolf Weber

Oh please!
Don’t omit the term “reasonable”, I thought you already got it:
”… while I am in favour of reasonable secure “backdoors”.”
[@Rolf Weber, this thread, March 24, 3:09 AM]

Secure backdoor: Impossible.
Reasonable secure backdoor: Crazy.

Buck March 30, 2016 9:03 PM

In which @Rolf Weber accidentally argues against his own ‘Snowden myth’ myth:

So either it is implemented as a “direct access”, but with a box in between that filters out everything but the target, or you copy the selected data and provide it on another system. Because in cases like WhatsApp or Skype a separate decrypting process is needed, I’d clearly prefer the copying.
But these are only necessary, additional technical steps that don’t change much about the nature of the monitoring; it is still comparable to “traditional” surveillance.

So, it’s not “direct access” — we’ll just “copy the selected data” about whomever we want to the PRISM interface. But these are only necessary, additional technical steps, that don’t change much on the nature of the monitoring… LOL! Thanks Rolf, I needed that laugh 😛

@all (am I using this correctly, or is it implied already):

I’m wondering what the folks here would think about this sort of compromise:

Approach #2: Options and Notice

Here’s an approach that I haven’t seen proposed anywhere, one based on consumer options and transparency: What if Congress required that encrypted services (storage or communications services or both) have what one might call an “Emergency Access Mode” as an option available to consumers?

Wael March 30, 2016 11:53 PM

@Buck,

all (am I using this correctly, or is it implied already):

Are you messing with me? Is the moon full already? Lol! You are using it correctly 🙂

Buck March 31, 2016 12:09 AM

@Wael

Yeah, I was messing with you a bit 😉 I can’t see it, so I don’t think it is yet!

Clive Robinson March 31, 2016 12:32 AM

@ Buck,

I’m wondering what the folks here would think about this sort of compromise:

Approach #2: Options and Notice

It’s not a good idea, because of the sting in the tail of a register of “Opt Outs”. In effect it’s an “unwarranted list of suspects” for whom there is no “articulable suspicion”. It’s like saying “You draw your curtains, therefore I can assume you have an ATAP factory or arms cache in your front room”.

The problem with the article, certainly with #2, #4 & #5, is that the author has not thought it through enough.

The author distinguishes between “data in transit” and “data at rest”, but does not distinguish between “plaintext data” and “ciphertext data”, which is a very important point the likes of legislators need hammered into their heads. Especially DF, who, for all her supposed work with the “Spooks R Us” boys club committee, repeatedly demonstrates a lack of regard for fundamental points.

As anyone who has been following the DOJ/FBI -v- Apple debate should know, the important thing is “what ‘plaintext data’ you can get access to”, not “what you can make a third party do via legislation”. That is, if the user moves the crypto end point beyond the device’s reach, being able to put every HiTec third party up against the wall is not going to get you the “plaintext”, only the “ciphertext”…

Legislators who know everything about law but nothing about the subject at hand, acting at the prompting of James Comey and Co, are going to get the legislation wrong.

And it’s in Comey and Co’s interests that they do get it wrong. To see why, you have to understand the next issue: the difference between codes and ciphers, and why it’s important.

If Comey and Co get their way, the legislation will have far too wide a scope, and thus third party companies will be forced into preventing users from moving the cipher end points beyond the phone etc. With ciphers and stego this is in theory possible; with plaintext One Time Codes and many other codes it is not. But Comey does not care, “he gets his man” any man will do, be it the actual criminal or exec of a third party company, as long as Comey can hang somebody out to dry then “Justice has been ‘seen’ to be done”, which makes him look good irrespective of if “Justice has ‘actually’ been done”.

The issue is that ciphers and codes work differently. Ciphers take any input and make the output look as random as possible, so that the plaintext in cannot be linked to the ciphertext out in any way except by length. Codes, on the other hand, are simple substitution tables of limited size, dealing only with certain predetermined messages at the input; the only requirement for the codetext output is that it is uniquely determinable by the recipient. Thus it does not have to be of any given size, nor does it have to be incomprehensible in any way.

The ultimate form of this is the One Time Code phrase, as seen in use by the SOE via the BBC during WWII with their “And now some messages for our friends…”. Agents would memorise three or four short phrases such as “The lark flies high and far in the spring”, which, whilst fully intelligible as a message, has no discernible meaning other than between the agent and their handling officer at HQ. It might, for instance, mean “air drop is on” or “blow up target X” or anything else, such as “D-Day scheduled within two weeks”.

The point is that whilst a “spell checker” could be made to detect the use of ciphertext, a suitable codetext would go through without issue. Thus no level of technical measure a third party manufacturer could put in their product will be sufficient to stop the use of codes.
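A toy Python filter illustrates the spell-checker point: random ciphertext is flagged, while the SOE-style code phrase sails through because its outward form is ordinary English. The word list and threshold are invented for the example.

```python
import base64
import os
import string

# Tiny stand-in dictionary; a real filter would use a full word list.
KNOWN_WORDS = {"the", "lark", "flies", "high", "and", "far", "in", "spring"}

def looks_like_cipher(text):
    """Flag text whose tokens are mostly not dictionary words."""
    cleaned = text.lower().translate(str.maketrans("", "", string.punctuation))
    words = cleaned.split()
    if not words:
        return True
    recognised = sum(w in KNOWN_WORDS for w in words)
    return recognised / len(words) < 0.5

ciphertext = base64.b64encode(os.urandom(24)).decode()   # random-looking token
codetext = "The lark flies high and far in the spring"   # SOE-style code phrase

assert looks_like_cipher(ciphertext)       # flagged as cipher use
assert not looks_like_cipher(codetext)     # passes the filter unremarked
```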

Whilst the simple use of ciphers can be fairly easily detected, using either Stego or a Code on the ciphertext to disguise it will make detecting it as hard as detecting a code.

Thus you can demonstrate that what Comey & Co “publicly claim” they want can never be delivered… So you have to assume the claims are a front for something else, and I suspect that is a push for more power / resources / less accountability.

Buck April 2, 2016 3:44 PM

@Clive Robinson

You brought up many of the same points I was thinking about when I posted the other day. I think we’re mostly in agreement here, but please allow me to elucidate further!

It’s not a good idea, because of the sting in the tail of a register of “Opt Outs”. In effect it’s an “unwarranted list of suspects” for whom there is no “articulable suspicion”.

I’d be willing to bet that the existing ‘do not surveil’ lists are already quite valuable, especially when cross-referenced with other stolen datasets (Anthem, Ashley Madison, AT&T, OPM, Target, Verizon, etc.)

That is if the user moves the crypto end point beyond the devices reach being able to put every HiTec third party up against the wall is not going to get you the “Plaintext” only the “ciphertext”…

Of course, but, I’m not entirely convinced that the DOJ/FBI are really incentivized to care about this at all. It seems like you (at least somewhat) agree with me, when you say:

But Comey does not care, “he gets his man” any man will do, be it the actual criminal or exec of a third party company, as long as Comey can hang somebody out to dry then “Justice has been ‘seen’ to be done”, which makes him look good irrespective of if “Justice has ‘actually’ been done”.

For them, it doesn’t matter one way or another if any major criminal activity is happening outside of their scope, as long as arrests and convictions are made. For the more high profile cases, it would appear that they could easily obtain the support of SIGINT agencies…

The problem with the article certainly in #2, #4 & #5 is the author has not thought it through enough.

#1 seems to be the obvious solution to us. Is it likely to happen anytime soon? That’s too hard for me to say amidst all the propaganda going around at the moment… As for #3, well, Benjamin Wittes writes:

Of course, the nature of the obligation would be where the rubber hits the road in this model. But note, that in pursuing this route, Congress is empowered to pick which options to put on or take off the table.

However, I’m inclined to agree with this statement of yours:

Legislators who know everything about law but nothing about the subject at hand, acting at the prompting of James Comey and Co, are going to get the legislation wrong.

The reason I singled out #2, was because it’s basically the status quo, but with a larger marketing budget being devoted towards convincing people to keep using insecure technology. While this strategy is certainly doomed to fail in the short-medium term (too many employers will not allow it; too many criminals will become aware of it very quickly), it does give the appearance of ‘doing something’… For that reason alone, I could see it gaining some traction in the current debate.
