CONIKS

CONIKS is a new, easy-to-use, transparent key-management system:

CONIKS is a key management system for end users capable of integration in end-to-end secure communication services. The main idea is that users should not have to worry about managing encryption keys when they want to communicate securely, but they also should not have to trust their secure communication service providers to act in their interest.

Here’s the academic paper. And here’s a good discussion of the protocol and how it works. This is the problem they’re trying to solve:

One of the main challenges to building usable end-to-end encrypted communication tools is key management. Services such as Apple’s iMessage have made encrypted communication available to the masses with an excellent user experience because Apple manages a directory of public keys in a centralized server on behalf of their users. But this also means users have to trust that Apple’s key server won’t be compromised or compelled by hackers or nation-state actors to insert spurious keys to intercept and manipulate users’ encrypted messages. The alternative, and more secure, approach is to have the service provider delegate key management to the users so they aren’t vulnerable to a compromised centralized key server. This is how Google’s End-To-End works right now. But decentralized key management means users must “manually” verify each other’s keys to be sure that the keys they see for one another are valid, a process that several studies have shown to be cumbersome and error-prone for the vast majority of users. So users must make the choice between strong security and great usability.

And this is CONIKS:

In CONIKS, communication service providers (e.g. Google, Apple) run centralized key servers so that users don’t have to worry about encryption keys, but the main difference is CONIKS key servers store the public keys in a tamper-evident directory that is publicly auditable yet privacy-preserving. On a regular basis, CONIKS key servers publish directory summaries, which allow users in the system to verify they are seeing consistent information. To achieve this transparent key management, CONIKS uses various cryptographic mechanisms that leave undeniable evidence if any malicious outsider or insider were to tamper with any key in the directory and present different parties different views of the directory. These consistency checks can be automated and built into the communication apps to minimize user involvement.
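
To make the mechanism concrete, here is a minimal Python sketch of a tamper-evident directory: a Merkle tree over username-to-key bindings whose root the provider would sign and publish each epoch, plus an inclusion proof that a client can check against that root. All names here are illustrative; real CONIKS additionally uses VRF-derived private indices, commitments for privacy, and actual digital signatures on the root.

```python
# Minimal sketch of a tamper-evident key directory in the spirit of
# CONIKS. Not the actual implementation: real CONIKS adds VRF-based
# private indices, commitments, and signed tree roots (STRs).
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def leaf_hash(user: str, key: str) -> bytes:
    return h(b"leaf|" + user.encode() + b"|" + key.encode())

def build_tree(leaves):
    """Return all levels of a Merkle tree, leaves first."""
    levels = [leaves]
    while len(levels[-1]) > 1:
        lvl = levels[-1]
        if len(lvl) % 2:                 # duplicate last node on odd levels
            lvl = lvl + [lvl[-1]]
        levels.append([h(b"node|" + lvl[i] + lvl[i + 1])
                       for i in range(0, len(lvl), 2)])
    return levels

def inclusion_proof(levels, index):
    """Sibling hashes from leaf to root for the leaf at `index`."""
    proof = []
    for lvl in levels[:-1]:
        if len(lvl) % 2:
            lvl = lvl + [lvl[-1]]
        sibling = index ^ 1
        proof.append((lvl[sibling], index % 2))  # (hash, am_i_right_child)
        index //= 2
    return proof

def verify_inclusion(leaf, proof, root):
    node = leaf
    for sibling, is_right in proof:
        node = h(b"node|" + sibling + node) if is_right else h(b"node|" + node + sibling)
    return node == root

# The provider publishes the root each epoch; clients check their own
# binding against it and compare roots with one another.
bindings = [("alice", "pkA"), ("bob", "pkB"), ("carol", "pkC")]
leaves = [leaf_hash(u, k) for u, k in bindings]
levels = build_tree(leaves)
root = levels[-1][0]                 # this is what an STR would sign
proof = inclusion_proof(levels, 1)   # bob checks his own binding
assert verify_inclusion(leaf_hash("bob", "pkB"), proof, root)
```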

Posted on April 6, 2016 at 10:27 AM

Comments

Parker April 6, 2016 11:20 AM

I don’t think the white paper claims it is “easy-to-use.” Seems like the term was thrown in here. The chief motive, perhaps the only motive, is the struggle against eavesdropping. There are several cautious, limiting statements in the white paper.

Compare this scheme to alternatives, but remain wary of incentives to make the user responsible for key management. Key management doesn’t go away here. Take SpiderOak: they like to say that even they can’t decrypt your files, but in reality there’s a lot more to it than that.

Daniel April 6, 2016 11:44 AM

I personally don’t believe that any true zero-knowledge system is possible and I don’t believe it is worth the effort to try. So long as a third party is involved there is an element of human trust, even if it is nothing more than trusting those who built the zero-knowledge system.

A secret is best kept between two people, provided one of them is dead.

Parker April 6, 2016 12:57 PM

“A secret is best kept between two people, provided one of them is dead.”

That’s great.

Terry Cloth April 6, 2016 1:24 PM

It’s from Benjamin Franklin’s Poor Richard’s Almanack, but in the form “Three may keep a secret if two of them are dead.”

Greg April 6, 2016 3:41 PM

Interestingly, WhatsApp has gone some way toward solving this problem since turning on end-to-end encryption this week. Every chat with another user has an info button that displays a QR code and a numeric code, which you are encouraged to check or scan face-to-face with your friend.

Either approach is fairly easy for the average user to comprehend. The worry might only be that they don’t fully understand exactly why they should be doing such a check…
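
For what it’s worth, a code of this kind can be derived deterministically from both parties’ public keys so that both phones display the same string. Here is a hedged Python sketch in that spirit; the key bytes and the derivation below are placeholders, not WhatsApp’s actual scheme:

```python
# Hedged sketch: derive a short numeric verification code from both
# parties' public keys, in the spirit of WhatsApp/Signal "security
# numbers". The real apps use their own iterated derivations.
import hashlib

def numeric_code(pub_a: bytes, pub_b: bytes, groups: int = 12) -> str:
    # Sort so both phones compute the identical code no matter whose
    # key is treated as "first".
    first, second = sorted([pub_a, pub_b])
    digest = hashlib.sha512(first + second).digest()
    # Derive 5 decimal digits per group from successive digest bytes.
    chunks = []
    for i in range(groups):
        n = int.from_bytes(digest[5 * i:5 * i + 5], "big") % 100000
        chunks.append(f"{n:05d}")
    return " ".join(chunks)

alice_pub = b"\x04" + b"A" * 64   # placeholder public-key bytes
bob_pub = b"\x04" + b"B" * 64
print(numeric_code(alice_pub, bob_pub))
```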

Jesse Thompson April 6, 2016 3:42 PM

I’unno, I have yet to use any of these systems myself, but I’d imagine a UI like the following would be simple enough.

1: At the beginning of the call, the app rates the call as “not yet secure”. You also have an “is this line clean?” button that simply invalidates the call into a “not yet secure” state again. The interface is colored red, with a soft, low-pitched but psychologically disappointing hum in the background (maybe sobbing, maybe a sad song, I’unno) and a sad beep or slightly stronger note every now and again to remind you of the call’s insufficient security, without materially getting in the way of the call or being anywhere near as annoying as a repeated alarm: just a sad enough background theme that folks will be prompted not to leave it suffering and to get their jobs done.

2: Visual interface clarifies how to secure the line. It’s the standard “read a code to each other” shtick, but instead of numbers or letters we exploit the higher entropy per syllable idea of 1700-3000 easily digestible dictionary words. I’ve always favored my personally groomed list of 2845 for this:
http://lightsecond.com/password_building_block_dict.txt

Each word spoken is far easier to say and far harder to fabricate and FAR far easier to verify on the line than the equivalent 3 and a third numeric digits it would require to match the same level of entropy. Each word could even be paired with a small icon to help the verifier’s brain much more easily digest what is being heard and compare it to what is read from the screen.
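
The arithmetic checks out. A quick Python check, using the 2845-word figure above:

```python
# Entropy of one word from a 2845-word list, versus decimal digits.
import math

words = 2845
bits_per_word = math.log2(words)              # ~11.47 bits per word
digits_equiv = bits_per_word / math.log2(10)  # ~3.45 decimal digits
print(f"{bits_per_word:.2f} bits/word ~ {digits_equiv:.2f} decimal digits")

# Four words carry ~45.9 bits; four decimal digits carry only ~13.3.
print(f"4 words: {4 * bits_per_word:.1f} bits; 4 digits: {4 * math.log2(10):.1f} bits")
```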

I don’t presently know if it’s important or helpful for only one end of the call to read the code to the other, or to use two codes and have each end read one to the other, but once both callers are satisfied the line is clear, they press a “Satisfied now” button to change the interface back to something more happy and fulfilled.

Altogether this approach should be effective at getting the people who are most concerned about security to verify the integrity of their connections in the first place (people who don’t care will either press the “satisfied” button right away or else change settings to “I don’t care if my line is clear, leave me alone!”). You’re protected from tricky “dropped call” scenarios because the sad sounds pick back up to remind you to re-verify, and the process feels less like doing taxes or punching codes into a lock (both can be psychologically intimidating and breed procrastination) and more like some goofy kids’ game to humor a hyperactive 3-year-old and bribe them to leave you alone for a bit.

And it’s very helpful to prevent an interface from feeling intimidating, even to the people who really care, because perceived intimidation leads to the human element failing over time and an increase in mistakes and missed steps.

The fact that people who DON’T care can easily disarm the interface is still helpful to the entire ecosystem because, well, they pay the price directly with no meaningful splash damage if they do get MITM’d. Plus, there exists no reliable automated way for a third party to know whether the players are trying to secure the line or not, and the “satisfied now” button affects only the client-side UI; no data is shared with a central server or anything.

To an attacker the effect is: everyone looks end-to-end secure, and if you attempt an MITM then either you get people who don’t care (which also means they probably are not targets, or at least they are on par with people who own door locks but never use them) or you get people who care enough to follow the procedure, who are not likely to leak sensitive data prior to running the codes… and noticing that they don’t match, getting a TON more suspicious, and then retrying the call. 😛

Dan April 6, 2016 5:07 PM

@Jim,
CONIKS is a variant of certificate transparency. Visit here to learn more about certificate transparency in general. With an implemented certificate-transparency system, any misbehavior on the part of the key-logging server yields a cryptographic proof of that misbehavior.
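
To illustrate the proof-of-misbehavior idea: two validly signed directory summaries (STRs) for the same epoch with different roots are, together, transferable evidence of equivocation. A minimal sketch with the signature check stubbed out; a real system would verify, say, Ed25519 signatures under the provider’s public key:

```python
# Sketch of "misbehavior yields a cryptographic proof": two signed
# STRs for one epoch with different roots prove equivocation.
from dataclasses import dataclass

@dataclass(frozen=True)
class STR:
    epoch: int
    root: bytes
    signature: bytes

def signature_valid(s: STR, provider_pub) -> bool:
    # Placeholder: a real implementation would verify s.signature over
    # (s.epoch, s.root) under the provider's public key.
    return True

def equivocation_proof(a: STR, b: STR, provider_pub) -> bool:
    """True iff (a, b) proves the provider showed two views of one epoch."""
    return (a.epoch == b.epoch
            and a.root != b.root
            and signature_valid(a, provider_pub)
            and signature_valid(b, provider_pub))

seen_by_alice = STR(epoch=42, root=b"root-1", signature=b"sig-1")
seen_by_bob = STR(epoch=42, root=b"root-2", signature=b"sig-2")
print(equivocation_proof(seen_by_alice, seen_by_bob, provider_pub=None))  # True
```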

MrC April 6, 2016 5:41 PM

  1. No matter how many times I tell myself it’s “co-niks,” my eyes still see “coinks.”

  2. Leaving the whistleblowing protocol for another day is a gigantic cop-out. Without that, the whole thing is completely worthless. And it’s not a trivial problem by any means. The macro case of “the provider has published two inconsistent signed STRs” is relatively easy — though I might supplement the auditors with a P2P system for distributing observed STRs, or even just include observed STR data in every message sent (a minimal sketch of this appears after this list). The micro case of “a person claiming to be user X says that a fake key has been added to his binding” has three problems that I can see: First, the lack of a channel for broadcasting this claim to other clients, besides going through the untrusted provider; Second, DOS attacks in which an attacker makes a false whistleblowing claim in order to destroy trust in the legitimate key; and Third, how can the provider responding to a claim know which key is the correct one to leave in place? The second and third problems both derive from the problem of distinguishing the legitimate user X from an attacker. The micro case has at least three subtypes: (1) “a person claiming to be user X says he got MITMed during account creation, so the original key is fake,” (2) “a person claiming to be user X says that a new fake key was added to his binding,” and (3) “a person claiming to be user X says that a new fake key signed with the stolen old key was added to his binding.” Subtype 2 would not exist but for the authors’ insistence on permitting binding updates without a signature from the existing key. I don’t have the faintest clue how to deal with subtypes 1 and 3.

  3. The system incurs a tremendous complexity cost in order to conceal whether a given username exists. This strikes me as a very bad bargain. It would be much better to cut all of that stuff, expressly warn registering users that their username’s existence will be publicly known, and advise them to use a pseudonymous username if they don’t want the fact they have an account publicly known.
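
On the macro case in point 2 above, the gossip idea is easy to sketch: piggyback the latest observed (epoch, root) pair on every message and flag any conflicting view of the same epoch. A minimal sketch; the names are illustrative, not from the CONIKS paper:

```python
# epoch -> first root we accepted for that epoch
observed: dict[int, bytes] = {}

def record_and_check(epoch: int, root: bytes) -> bool:
    """Record an observed STR root; return False on a conflicting view."""
    if epoch in observed and observed[epoch] != root:
        return False   # two different roots for one epoch: raise the alarm
    observed.setdefault(epoch, root)
    return True

assert record_and_check(7, b"root-x")        # our own view of epoch 7
assert record_and_check(7, b"root-x")        # a peer agrees
assert not record_and_check(7, b"root-y")    # a peer saw a different root
```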

Joseph Bonneau April 6, 2016 7:32 PM

@MrC it would indeed be simpler to drop the VRF layer that provides privacy. Though I don’t think it’s a “tremendous” complexity cost.

As you say, in a new system perhaps users can be told their usernames should be considered public. But CONIKS is designed for retrofit onto an existing service like Gmail. You can’t just tell hundreds of millions of users that things are now different and that you’re publishing their email addresses.

Chris April 7, 2016 3:21 AM

Hello

I am no specialist, but wouldn’t using a blockchain-like ledger to publish/check the public keys facilitate verification?

Thanks for your thoughts.

Chris

Brian M April 13, 2016 11:48 AM

It’s surprising to me that it’s 2016 and we are only now figuring out the “auditability” portion of security. It’s one of the central tenets, but only now are we getting it right. We can really thank Bitcoin for showing us how to do it right.
