Our New Regimes of Trust

Society runs on trust. Over the millennia, we’ve developed a variety of mechanisms to induce trustworthy behavior in society. These range from a sense of guilt when we cheat, to societal disapproval when we lie, to laws that arrest fraudsters, to door locks and burglar alarms that keep thieves out of our homes. They’re complicated and interrelated, but they tend to keep society humming along.

The information age is transforming our society. We’re shifting from evolved social systems to deliberately created socio-technical systems. Instead of having conversations in offices, we use Facebook. Instead of meeting friends, we IM. We shop online. We let various companies and governments collect comprehensive dossiers on our movements, our friendships, and our interests. We let others censor what we see and read. I could go on for pages.

None of this is news to anyone. But what’s important, and much harder to predict, are the social changes resulting from these technological changes. With the rapid proliferation of computers—both fixed and mobile—computing devices and in-the-cloud processing, new ways of socialization have emerged. Facebook friends are fundamentally different than in-person friends. IM conversations are fundamentally different than voice conversations. Twitter has no pre-Internet analog. More social changes are coming. These social changes affect trust, and trust affects everything.

This isn’t just academic. There has always been a balance in society between the honest and the dishonest, and technology continually upsets that balance. Online banking results in new types of cyberfraud. Facebook posts become evidence in employment and legal disputes. Cell phone location tracking can be used to round up political dissidents. Random blogs and websites become trusted sources, abetting propaganda. Crime has changed: easier impersonation, action at a greater distance, automation, and so on. The more our nation’s infrastructure relies on cyberspace, the more vulnerable we are to cyberattack.

Think of this as a “security gap”: the time lag between when the bad guys figure out how to exploit a new technology and when the good guys figure out how to restore society’s balance.

Critically, the security gap is larger when there’s more technology, and especially in times of rapid technological change. More importantly, it’s larger in times of rapid social change due to the increased use of technology. This is our world today. We don’t know *how* the proliferation of networked, mobile devices will affect the systems we have in place to enable trust, but we do know it *will* affect them.

Trust is as old as our species. It’s something we do naturally, and informally. We don’t trust doctors because we’ve vetted their credentials, but because they sound learned. We don’t trust politicians because we’ve analyzed their positions, but because we generally agree with their political philosophy—or the buzzwords they use. We trust many things because our friends trust them. It’s the same with corporations, government organizations, strangers on the street: this thing that’s critical to society’s smooth functioning occurs largely through intuition and relationship. Unfortunately, these traditional and low-tech mechanisms are increasingly failing us. Understanding how trust is being, and will be, affected—probably not by predicting, but rather by recognizing effects as quickly as possible—and then deliberately creating mechanisms to induce trustworthiness and enable trust, is the only thing that will enable society to adapt.

If there’s anything I’ve learned in all my years working at the intersection of security and technology, it’s that technology is rarely more than a small piece of the solution. People are always the issue and we need to think as broadly as possible about solutions. So while laws are important, they don’t work in isolation. Much of our security comes from the informal mechanisms we’ve evolved over the millennia: systems of morals and reputation.

There will exist new regimes of trust in the information age. They simply must evolve, or society will suffer unpredictably. We have already begun fleshing out such regimes, albeit in an ad hoc manner. It’s time for us to deliberately think about how trust works in the information age, and use legal, social, and technological tools to enable this trust. We might get it right by accident, but it’ll be a long and ugly iterative process getting there if we do.

This essay was originally published in The SciTech Lawyer, Winter/Spring 2013.

Posted on February 12, 2013 at 6:53 AM

Comments

Stephan Engberg February 12, 2013 7:54 AM

I honestly think you confuse terms here.

There is trust as in the residual risk you have to accept to continue a transaction, however accurate your ability to perceive this risk is. This risk can be significantly reduced, if not eliminated, through technology design (trustworthiness as in fault tolerance, redundancy, isolation, etc.).

Then there is trust as in expectations of future behaviour in human systems and motives. This can be reduced through a variety of instruments, but never eliminated, and thus does require – justified or not – a leap of faith in order for processes to occur.

Many of our problems are about more or less intentionally bad security design in order to establish a position of control (e.g. lock-in, secondary use of data, etc.). I.e. we trust what we shouldn't trust – and do so only because we either misperceive risks or lack alternatives, often both.

Peter A. February 12, 2013 8:27 AM

Bruce, you frown upon the cyber- prefix and then you use it yourself, like in “cyberfraud”.

Francois February 12, 2013 10:17 AM

@Stephan Engberg:
Your points are valid but don’t appear to have anything to do with the post.

One of the major points is that modern technology is weakening our intuitive risk assessment abilities (and all the trust/verification mechanisms we’ve used in the past). It’s got nothing to do with whether the technology is reliable (e.g. redundancy). Bruce’s point is the opposite: that the weak point is the way we use (or abuse) the technology. As technology changes, our uses of technology change too.

Of course a leap of faith is always required. That is a fundamental postulate of this post. This is a common theme on this blog. Leaps of faith are not only required, they are getting larger and larger – but to many people, they look smaller. We are not reducing the risk in the human trust element, we are increasing it.

Simon February 12, 2013 10:41 AM

I’m not so sure about the ‘people are the problem’ I often hear. Isn’t it a little like saying the reason water is falling from the sky is because it’s raining?
Outstanding essay with a lot to think about. Is it possible technology is getting away from us and good people will never be able to establish new regimes of trust?

Winter February 12, 2013 12:15 PM

@Simon
“I’m not so sure about the ‘people are the problem’ I often hear.”

But trust and security are about people. They are the problem, indeed.

Security is no different from health care automation: people are the problem because health is about people!

Simon February 12, 2013 12:41 PM

@Winter – I’m not denying that people are part of the problem, but so what? Where does that take anyone? Oh, OK, now that we’ve determined people are the problem let’s get rid of all the people and then the problem(s) will go away? This is a dead end. Of course people are the problem, they’re the problem in UI design, in air bag deployment, in criminal law, in everything. So what’s the point?
This is what IT invariably retreats to every time a new threat appears: “it’s those stupid people, just tell them ‘no’.”

Eric February 12, 2013 12:45 PM

Think of this as a “security gap”: the time lag between when the bad guys figure out how to exploit a new technology and when the good guys figure out how to restore society’s balance.

I think you might agree that it’s more accurately a time lag between clever individuals and plodding large organizations. Who’s good and who’s bad depends on your perspective – Chinese dissidents exploiting the early state firewalls, for example.

MingoV February 12, 2013 6:03 PM

“People are always the issue…”

What I find scary is that the young adult and teen population heavily uses the internet, but many of them are unconcerned about electronic privacy and security. My two college-attending daughters provide numerous anecdotes: students who post idiotic stories and pictures on social media (where they have hundreds of “friends”), students who leave web sites open on school lab computers, students who use one simple password for everything, etc. Many are unconcerned about web trackers, web sites that sell info to spammers (or worse), and Trojan horses via downloads or e-mails. This is like going to a gunfight with nail clippers.

jdgalt February 12, 2013 8:39 PM

What upsets me about this piece is its tone of naive acceptance of notions like “society” and government (and the implied uncritical trust of the people who seek to be called by those labels).

Technology these days is giving governments unprecedented new powers, both directly (in physical capability) and in terms of legal precedent (because judges always seem to answer new questions posed by technology in ways that let their perceived allies, the police, do anything they want). This is already starting to result in a “reign of terror” and must be resisted.

Most local police forces have no business even having SWAT teams, much less tanks and drones. Those that do have them are already starting to use them for trivial reasons.

In short, the police are no longer “us.” And they’re the ones who have changed.

Narasimha Kaushik February 12, 2013 10:37 PM

Very thought-provoking, Mr. Schneier. I too have frequently felt that it is not the system which is at fault, but the actors!

Stephan Engberg February 13, 2013 3:18 AM

@Francois

Beg to differ.
As Einstein said, you cannot solve problems with the same level of consciousness that created them.

That is, in my view, exactly what Bruce is doing here.

The problem is that there are several layers of consciousness.

The low level is perimeter-security thinking, where control sits in some server where it cannot be protected. The more perimeter technology, the less secure we become.

This is driven by bad design and non-legitimate interests, as the bad design also establishes positions of power and control over external entities.

In my view, Bruce correctly addresses this as a problem (e.g. security theater).

But at the same time Bruce fails to come up with solutions, as he remains in the same mental perception of security as something about protecting systems from users.

This line of thinking fails 100% with the cloud, the Internet of Things, and the semantic web.

Internet/cloud-grade security requires total isolation, i.e. ensuring controls remain client-side so risks don’t accumulate and power doesn’t concentrate.

Single transactions should be designed so that even if all perimeter security fails, you have built-in recovery through the non-existence of knowledge in the server that could scale the attack towards external entities.

This includes system owners and system administrators.
See e.g. New Security Models
http://digitaliser.dk/resource/896495
and this
http://sourceforge.net/projects/linksmart/
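
A minimal sketch of the client-side-control idea above (my own illustration, not taken from either linked project; Python’s cryptography package and its Fernet construction are just an assumed stand-in for whatever scheme a real design would use): the client encrypts before upload, so a breached server, including its owners and administrators, holds only ciphertext and has no knowledge with which to scale an attack towards external entities.

```python
# Hypothetical sketch: keep the control (the key) client-side so the
# server never holds usable knowledge.
from cryptography.fernet import Fernet

# The key is generated and kept on the client; the server never sees it.
client_key = Fernet.generate_key()
f = Fernet(client_key)

record = b"transaction details visible only to the client"
ciphertext = f.encrypt(record)   # this is all that gets uploaded

# A server-side breach exposes only ciphertext; decryption requires
# client_key, which never leaves the client.
assert f.decrypt(ciphertext) == record
```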

Trust is something we need to have between people – peer-to-peer and end-to-end – towards e.g. doctors, friends, and judges. Not towards IT systems, as that is exactly where we have failed to learn the lessons of the Digital Age.

Geoff Nicoletti February 13, 2013 7:06 PM

7 major problems of discourse arise when working with intel people… how do you have trust? 7 problems: lying, forgetfulness, ignorance, withheld info, jargon, ambiguity/vagueness, and the attitude of the receiver. And they are only the major ones… how about stealing? You give the govt. the key ideas when a US plane is brought down over China… or the basis for creating the Northern Command, and someone else gets the thanks. I gave 44 ideas during Y2K and only Koskinen and Sen. Bennett thanked me, even though NSA called me, and Mitre, etc. Career people stealing… trust… trust?

Hagen P February 14, 2013 3:53 AM

Twitter has no pre-Internet analog.

Hmm… how about small classified ads?

And what about the little notes that people post on ad boards in shopping centers?

(Much slower, and non-instantaneous, sure.)

Stephan Engberg February 14, 2013 4:50 PM

Twitter is (self-)marketing – very different from ordinary life under attack by an aggressive infrastructure and bureaucratic structures

ZeroZero February 14, 2013 6:00 PM

Twitter has no pre-Internet analog.

And, I would never trust Twitter with ANYTHING. Everything involving information has an analog.

Stephan Engberg February 18, 2013 1:05 AM

@ Francois

I understand your point about the growing complexity – of course the challenge of evaluating risks grows as e.g. networks open up an unlimited range of new risks. But that is also a consequence of bad design that fails to logically isolate transactions on top of an open communication network.

See e.g. Linksmart, where we had to NAT IPv6-enabled devices because IPv6 leaks data even with “privacy” features.
http://sourceforge.net/projects/linksmart/
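
An aside from me, not anything from the Linksmart code: the textbook example of IPv6 address-level leakage is the EUI-64 interface identifier used by stateless autoconfiguration, which embeds the device’s MAC address and so makes a host linkable across every network it visits. Privacy extensions were introduced precisely to avoid this, though even then the routable prefix still reveals which network you are on. A small Python sketch of the EUI-64 derivation:

```python
def eui64_interface_id(mac: str) -> str:
    """Derive the EUI-64 IPv6 interface identifier from a MAC address,
    showing that the MAC (a stable hardware identifier) is embedded in
    the address and trivially recoverable from it."""
    b = bytearray(int(x, 16) for x in mac.split(":"))
    b[0] ^= 0x02                                  # flip the universal/local bit
    eui = b[:3] + bytes([0xFF, 0xFE]) + b[3:]     # insert ff:fe in the middle
    return ":".join(f"{(eui[i] << 8) | eui[i + 1]:x}" for i in range(0, 8, 2))

print(eui64_interface_id("00:1a:2b:3c:4d:5e"))    # -> 21a:2bff:fe3c:4d5e
```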

I am addressing the main problem: that technology is designed to create risks – often deliberately. The main driver is someone wanting to use technology design to control people or processes.

Even if the designers are unaware of the consequences, failure by design is still a bigger problem than mere bad implementation (reliability).

Clive Robinson February 18, 2013 4:31 AM

@ Stephan Engberg, Francois,

Even if the designers are unaware of the consequences, failure by design is still a bigger problem than mere bad implementation (reliability)

“Failure by design” is one of those nebulous statements which usually says more about the observer’s viewpoint than it does about the design itself.

The design process is usually about achieving a transformation from assumed inputs to desired outputs within certain criteria.

It’s usually done in one of two ways: organically or by direct synthesis. The latter is the usual formal method involving a formal specification etc., whilst the former is the old artisanal “bolt a bit on” approach.

We know from science and engineering that the formal process can get us from A to B in a relatively simple series of steps in an efficient way (depending on what your chosen meaning of efficient is). Unfortunately, as many years of system failures have shown us, we frequently don’t really know where A and B are, and the assumptions mean the systems usually fail from very early mistakes.

Thus we have moved away from traditional engineering design methodologies to more artisanal techniques of “patterns”, “rapid prototyping” and similar, and as these are largely inefficient we tend to have “code reuse” hiding in there to make life interesting, as well as lots and lots of abstraction.

This gives us lots of code quickly, but nobody really knows what is in the code, or why, or what the implications are, especially with edge cases and hidden hooks. Basically we have “big code bases” but, as a side effect, overwhelming complexity both visible and not so visible, little of which gets tested or even edited out.

In fact, rather than editing out, the designers try to add more hooks, because they assume the goalposts are going to change, and by leaving lots of hooks in they can make changes more rapidly.

In more traditional fields of engineering you have the idea of stability and the base state, taken from nature. Overly simply, you can view it as water running downhill until it is pooled or cannot get any lower and has reached its base state. Engineers thus tend to design a system such that the desired system state is, where possible, the base state of any individual process. With software there generally is no “natural base state”, thus you have significant problems.

The way engineers generally deal with problems is with “test points”, where you can measure the state of the system at any given point in time. Whilst adding test points in usable ways affects the design and adds complexity, it usually does so in a way that is offset by the additional degree of control etc. it adds.

Whilst in ordinary engineering terms test points are desirable, in security engineering the primary requirement is usually to keep the system state unknown, thus they are undesirable. But like in any other system they are needed to ensure correct system function. It appears as a catch-22 situation to most people, which is why they usually get it wrong.

So as a consequence you end up with a code base with lots of unknown hooks and edge cases, with incorrectly designed and positioned test points, that frequently does what an incorrect design spec calls for…

Which, at some point in hindsight, makes the system look like it was “designed to fail”.

But… all of the resulting mess is fertile ground for those with a desire to subvert the working of the system.

Which gives rise to two notions of trust, trust in the system and trust in the user, and often they appear to be the opposite of each other…

Stephan Engberg February 28, 2013 5:57 PM

@Clive

Sure, when you focus on the software.

But e.g. when the application is running in the cloud, you wouldn’t trust the software even if it were perfect, as the underlying system is insecure. There you need to ensure that control is OUTSIDE the system, so that even if it is broken, security doesn’t fail.

Consider a report like this.
http://www.weforum.org/issues/rethinking-personal-data

But take away the identification upfront – then the problem changes drastically. Identification is the security problem, not the solution.

Clive Robinson March 1, 2013 4:57 AM

@ Stephan Engberg,

Consider a report like this

The page I get from the link has a number of reports, so I’m not entirely sure which you are referring to.

That said, “Personal Data” is an awkward subject that highlights another issue, which is the difference between the “computing stack” and the “computer security stack”. In the computing stack data is generally considered at a lower level than algorithms, whilst in the computer security stack data is considered above applications. This in and of itself creates a lack of clarity and thus design dissonance (a subset of cognitive dissonance).

Part of this is the distinction, or lack thereof, between data, meta-data, meta-meta-data and meta-meta-meta-data. Here meta-data can be considered the description of the bits in the data containers, including the algorithms for processing it (i.e. a floating point number and how to +-x/); meta-meta-data the description of the data containers and the data’s functional use; and meta-meta-meta-data, which starts dealing with the associated attributes of the data, which is where the security aspects start to come in. This latter layer is often spoken of as a data framework.

The security issue with frameworks is that the data and the attributes of the data are held in separate containers, so they can be separated. That is, the data can be stripped of its original attributes and have different attributes added.

We currently do not have a solution for this issue, other than storing data in an encrypted form and trusting that the application that decrypts the data to make it usable won’t in some way leak it.
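
One hedged illustration of that “encrypt and hope the attributes stay attached” situation (my sketch, not a scheme from the report; the attribute names are invented): authenticated encryption with associated data, here AES-GCM from Python’s cryptography package, at least detects when the attributes stored alongside the ciphertext have been stripped or swapped. Anyone holding the key can still re-encrypt under different attributes, so this is far from an irrevocable binding.

```python
import os, json
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)
aesgcm = AESGCM(key)
nonce = os.urandom(12)

data = b"the personal data itself"
attributes = json.dumps({"owner": "alice", "purpose": "billing"}).encode()

# The attributes are bound to the ciphertext as associated data.
ct = aesgcm.encrypt(nonce, data, attributes)

# Decryption succeeds only with the original attributes...
assert aesgcm.decrypt(nonce, ct, attributes) == data

# ...and fails if someone strips or swaps them.
try:
    aesgcm.decrypt(nonce, ct, b'{"owner": "mallory"}')
except Exception:
    print("attribute tampering detected")
```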

Because of this currently unresolved issue, people have looked at other ways to achieve privacy without having to rely on trust. One such, now debunked, method was anonymisation. The problem is that data aggregation weakens anonymisation to the point that it can be stripped away layer by layer until it’s gone. The only way to stop aggregation doing this is to make the data so weak it is just about useless.
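
A toy illustration of that stripping away, with entirely made-up data and column names: the classic linkage attack joins an “anonymised” release against an auxiliary public dataset on shared quasi-identifiers.

```python
import pandas as pd

# "Anonymised" release: names removed, quasi-identifiers kept.
medical = pd.DataFrame([
    {"zip": "02139", "birth": "1961-07-13", "sex": "F", "diagnosis": "..."},
    {"zip": "02144", "birth": "1975-02-02", "sex": "M", "diagnosis": "..."},
])

# Public auxiliary data (e.g. a voter roll) with the same quasi-identifiers.
voters = pd.DataFrame([
    {"name": "J. Smith", "zip": "02139", "birth": "1961-07-13", "sex": "F"},
])

# Joining on the quasi-identifiers re-identifies anyone whose combination
# is unique, despite the names having been removed from the release.
reidentified = medical.merge(voters, on=["zip", "birth", "sex"])
print(reidentified[["name", "diagnosis"]])
```

The only defence is to coarsen the quasi-identifiers until such joins stop working, which is exactly the “make the data so weak it is just about useless” trade-off described above.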

The problem for individuals is currently twofold:

1. Showing the data allows it to be copied.
2. In many jurisdictions data ownership is transferred to the person who collects it.

Until we change both of these, the only sensible thing for an individual to do is not release personal data in any way, shape, or form, at any time, to any person or system.

Unfortunately that is not what various well-funded entities want, thus I have no hope of the legislation ever being changed effectively (see current tricks to get around FOI, for instance). As for tying attributes to data irrevocably: whilst there are ways it might be achieved, the only current ideas are so grossly inefficient as to be neither realistic nor practical.
