Enabling Trust by Consensus

Trust is a complex social phenomenon, captured very poorly by the binary nature of Internet trust systems. A new paper proposes a system of trust based on social consensus: "Do You Believe in Tinker Bell? The Social Externalities of Trust," by Khaled Baqer and Ross Anderson.

From the abstract:

Inspired by Tinker Bell, we propose a new approach: a trust service whose power arises directly from the number of users who decide to rely on it. Its power is limited to the provision of a single service, and failures to deliver this service should fairly rapidly become evident. As a proof of concept, we present a privacy-preserving reputation system to enhance quality of service in Tor, or a similar proxy network, with built-in incentives for correct behaviour. Tokens enable a node to interact directly with other nodes and are regulated by a distributed authority. Reputation is directly proportional to the number of tokens a node accumulates. By using blind signatures, we prevent the authority learning which entity has which tokens, so it cannot compromise privacy. Tokens lose value exponentially over time; this negative interest rate discourages hoarding. We demotivate costly system operations using taxes. We propose this reputation system not just as a concrete mechanism for systems requiring robust and privacy-preserving reputation metrics, but also as a thought experiment in how to fix the security economics of emergent trust.

Blog post on the paper.
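As a rough illustration of the mechanism the abstract describes (this is a toy sketch with textbook RSA parameters, not the paper's actual protocol), the following shows how a blind signature lets an authority issue a token without learning which token it signed, and how a token's value could decay exponentially over time. The half-life parameter is an invented example, not a figure from the paper:

```python
# Toy RSA key for the token authority (textbook-sized parameters;
# a real system would use a proper crypto library and large keys).
p, q = 61, 53
n = p * q            # modulus: 3233
e = 17               # public exponent
d = 2753             # private exponent: e*d = 1 mod (p-1)*(q-1)

def blind(token, r):
    """Client blinds its token with a random factor r coprime to n."""
    return (token * pow(r, e, n)) % n

def sign(blinded):
    """Authority signs blindly -- it never sees the underlying token."""
    return pow(blinded, d, n)

def unblind(blind_sig, r):
    """Client strips the blinding factor, leaving a valid signature."""
    return (blind_sig * pow(r, -1, n)) % n

def verify(token, sig):
    return pow(sig, e, n) == token

token, r = 65, 47
sig = unblind(sign(blind(token, r)), r)
assert verify(token, sig)          # signature is valid...
assert sig == pow(token, d, n)     # ...identical to signing token directly

# Exponential decay (the "negative interest rate") discourages hoarding:
def value(initial, age_days, half_life_days=30.0):
    return initial * 0.5 ** (age_days / half_life_days)

print(value(100, 60))  # a 60-day-old token is worth 25.0
```

The key property is that the authority's view (the blinded message) is statistically unlinkable to the token it later sees spent, which is what prevents it from compromising privacy.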

Posted on February 17, 2016 at 5:18 AM • 15 Comments

Comments

Roland • February 17, 2016 8:12 AM

The default for most people seems to be: "Trust until you are betrayed (or face a major data loss)." This seems totally backward. Trust should be earned. It is a currency. If it is fiat (as most currencies now are) it will fall, unless the backer has already earned some trust, one way or another.

David Alexander • February 17, 2016 9:23 AM

This isn't a new idea. A group of financial organisations set up a model along these lines. Look at Identrus LLC.

blake • February 17, 2016 9:23 AM

@Gabriel
> Could a bigger cluster of people manipulate the trust level?

This is a social problem that you shouldn't expect to be able to solve with any tech. See: Enron, Lehman Brothers, etc, etc, etc.

B • February 17, 2016 10:36 AM

> Could a bigger cluster of people manipulate the trust level?

It depends on how the system works. If we throw out efficiency concerns, we could set up the system so that everyone has to be involved in issuing a new token as payment for service. As long as one person is trustworthy* and assuming there is a way to accurately determine that the service was provided, it would be difficult to gain undeserved tokens. In practice you would probably need something a little less secure for performance reasons, but at least in theory it is possible to defend against such collusion.


(*) There is a well-known result from cryptography researchers in the 80s that you can create a secure multiparty protocol for any polynomial-time function, which remains secure even if all but one of the parties are colluding. Issuing a token in exchange for providing service is the function in this case. There is a huge list of caveats though; one important one is that denial of service will always be possible, and in fact, it is impossible to prevent DoS attacks if a majority is attacking (or even if exactly half are attacking).
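The MPC result B mentions rests on building blocks like secret sharing. A toy sketch of one such building block, additive secret sharing (my illustration, not anything from the paper): each party splits its private input into random-looking shares, and the parties can jointly compute a sum without any single party learning an individual input.

```python
import random

MOD = 2**32  # shares live in the integers mod MOD

def share(secret, n_parties):
    """Split a secret into n additive shares that sum to it mod MOD."""
    shares = [random.randrange(MOD) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % MOD)
    return shares

# Three parties each share a private vote (1 = "service was delivered").
votes = [1, 1, 0]
all_shares = [share(v, 3) for v in votes]

# Party i sums the i-th share of every vote. No single party sees
# anyone's vote -- only one uniformly random share of each.
partials = [sum(column) % MOD for column in zip(*all_shares)]
total = sum(partials) % MOD
assert total == sum(votes)  # joint result: 2 of 3 report success
```

Full malicious-security MPC (the 1980s result B cites) needs much more machinery than this, but the sketch shows why collusion short of everyone is insufficient: any strict subset of the shares is uniformly random.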

wumpus • February 17, 2016 11:16 AM

@Roland

This is hardly a new idea. I'm pretty sure Machiavelli mentioned it in "The Prince" (at the time it was standard procedure to give enemies 'safe conduct' promises and promptly murder them). He merely noted that by simply being a Prince, people often put their trust in him (ignoring Psalm 146), and that the Prince can greatly profit by abusing this.

My guess is that the difference between what a credit bureau does and the suggested "crowdsourced trust" is largely a matter of charging for access to trustworthiness scores and of centralized generation of those scores (plus whatever rating, scoring, and de-gaming the centralized agency does). Credit bureaus still require crowdsourced data on bills paid or defaulted on.

Another example would be Yelp and its competitors. Judging by those examples, I'd suspect that the "trust market" is pretty much a lemon market. Getting a trust system off the ground is hard enough, but you would also have to manage those gaming the system while fending off well-marketed competitors willing to sell "trustworthiness" for a fee (isn't this how the BBB works?).

Daniel • February 17, 2016 11:29 AM

I question its responsiveness. This blog recently featured discussion of the FBI's deployment of a NIT that lasted two weeks. Even if that fact had somehow leaked out, I'm skeptical that a social system could respond within two weeks to the loss of trust in that server.

In other words, in low-accuracy information environments like the dark web, it takes time to sort out false positives from false negatives. Given the short time frames most attackers work within, this social solution doesn't pose any real limitation on their activities.

In yet other words, the paper states that the loss of trust can be achieved "fairly rapidly". But this statement is meaningless unless, in relative terms, it is more rapid than the attacker.

Observer • February 18, 2016 3:09 PM

I am of the opinion that Tor, and similar anonymity networks, simply need simple, anonymous IPS-like systems for detecting hacking. Detect, block. The signatures are not difficult. There is no reason for identity management of any kind beyond what the initial connection affords.

Abuse of these services for hacking is exactly why the end nodes get blacklisted. No other reason.

That's not to say this paper doesn't offer a good system useful for many things. But the problem with Tor is blacklisted end nodes, and they are rightly blacklisted because they are used as conduits for hackers.

(Clever hackers simply compromise systems and make their own anonymous proxies. These are script kiddies abusing the system and ruining it for everyone.)


Buck • February 18, 2016 8:31 PM

I, for one, find myself having a hard time imagining how such a system for consensual trust could continue to function while still preserving anonymity. Please allow me to present a simple thought experiment...

Say Tor, for example, has 10 million users, and only 14,159 of them have experienced a breach of trust. Will the large majority of others continue to trust it? Now, what if somewhere around 7,079 or so of those dissenters also happen to be BLM activists, Falun Gong members, or Walmart union supporters?

If a pseudo-anonymous system could somehow detect this anomaly, then what is to stop a similar number of snoopers from destroying the trust in an actually secure system?

If your answer includes side-channel-based deanonymized reports, then I'd suggest looking at @Daniel's concern about responsiveness...

At the very least, it seems to me that a dead-man's trigger would be absolutely essential in any effective proposal. Anyone got any other good ideas?

Wael • February 18, 2016 8:57 PM

@Buck,

Oh, poor Buck! No one biting today? Let me help you...

> Anyone got any other good ideas?

Nope! :)

Buck • February 18, 2016 9:21 PM

@Wael

> No one biting today?
Oh, so soon? Too early to say, I would think! To borrow from other threads of thought, the input-filter opacity might be something to more seriously consider... Algorithm opacity is a given for now. And, the output opacity is almost obviously pretty subjective at the moment!

Wael • February 18, 2016 9:58 PM

@Buck,

> Algorithm opacity...

What a coincidental term collision! You know there is something called Opacity, right? Perhaps it's better to think high level first.

Coyne Tibbets • February 21, 2016 12:30 PM

We can tell easily how reliable such a system will be: take a look at the consensus by which politicians are regarded as reputable and trusted by voters.

