Essays in the Category “Trust”
China is considering a new "social credit" system, designed to rate everyone's trustworthiness. Many fear that it will become a tool of social control—but in reality it has a lot in common with the algorithms and systems that score and classify us all every day.
Human judgment is being replaced by automatic algorithms, and that brings with it both enormous benefits and risks. The technology is enabling a new form of social control, sometimes deliberately and sometimes as a side effect.
This essay is part of a conversation with Gloria Origgi entitled "What is Reputation?" Other participants were Abbas Raza, William Poundstone, Hugo Mercier, Quentin Hardy, Martin Nowak and Roger Highfield, Bruce Schneier, and Kai Krause.
Reputation is a social mechanism by which we come to trust one another, in all aspects of our society. I see it as a security mechanism. The promise and threat of a change in reputation entices us all to be trustworthy, which in turn enables others to trust us. In a very real sense, reputation enables friendships, commerce, and everything else we do in society.
For the past six years, Volkswagen has been cheating on the emissions testing for its diesel cars. The cars' computers were able to detect when they were being tested, and temporarily alter how their engines worked so they looked much cleaner than they actually were. When they weren't being tested, they belched out 40 times the pollutants. Volkswagen's CEO has resigned, and the company will face an expensive recall, enormous fines, and worse.
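The mechanism is easy to describe in code. The sketch below is purely illustrative: the signal names and thresholds are invented for this example, not taken from Volkswagen's actual engine firmware. A defeat device needs only a heuristic for "we are being tested" and two engine calibrations to switch between.

```python
# Purely illustrative sketch of defeat-device logic. All names,
# signals, and thresholds are hypothetical, not from any real
# engine control unit.

def looks_like_emissions_test(wheel_speed_kph: float,
                              steering_angle_deg: float) -> bool:
    # On a dynamometer the wheels spin while the steering wheel
    # stays centered; that combination is one plausible tell.
    return wheel_speed_kph > 20 and abs(steering_angle_deg) < 0.5

def select_engine_map(wheel_speed_kph: float,
                      steering_angle_deg: float) -> str:
    if looks_like_emissions_test(wheel_speed_kph, steering_angle_deg):
        return "low-NOx calibration"   # clean enough to pass the test
    return "normal calibration"        # everyday driving, far dirtier

print(select_engine_map(50.0, 0.0))    # "low-NOx calibration"
print(select_engine_map(50.0, 15.0))   # "normal calibration"
```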
I've recently seen two articles speculating on the NSA's capability, and practice, of spying on members of Congress and other elected officials. The evidence is all circumstantial and smacks of conspiracy thinking—and I have no idea whether any of it is true or not—but it's a good illustration of what happens when trust in a public institution fails.
The NSA has repeatedly lied about the extent of its spying program. James R. Clapper, the director of national intelligence, has lied about it to Congress.
I jacked a visitor's badge from the Eisenhower Executive Office Building in Washington, DC, last month. The badges are electronic; they're enabled when you check in at building security. You're supposed to wear yours on a chain around your neck at all times and drop it through a slot when you leave.
I kept the badge.
Ever since Edward Snowden walked out of a National Security Agency facility in May with electronic copies of thousands of classified documents, the finger-pointing has concentrated on the government's security failures. Yet the debacle illustrates the challenge of trusting people in any organization.
The problem is easy to describe. Organizations require trusted people, but they don't necessarily know whether those people are trustworthy.
In July 2012, responding to allegations that the video-chat service Skype—owned by Microsoft—was changing its protocols to make it possible for the government to eavesdrop on users, Corporate Vice President Mark Gillett took to the company's blog to deny it.
Turns out that wasn't quite true.
Or at least he—or the company's lawyers—carefully crafted a statement that could be defended as true while completely deceiving the reader. You see, Skype wasn't changing its protocols to make it possible for the government to eavesdrop on users, because the government was already able to eavesdrop on users.
At a Senate hearing in March, Director of National Intelligence James Clapper assured the committee that his agency didn't collect data on hundreds of millions of Americans.
This morning, I flew from Boston to New York. Before that, I woke up in a hotel, trusting everyone on the staff who has a master key. I took a Boston taxi to the airport, trusting not just the taxi driver, but everyone else on the road. At Boston's Logan Airport, I had to trust everyone who worked for the airline, everyone who worked at the airport, and the thousands of other passengers. I also had to trust everyone who came in contact with the food I bought and ate before boarding my plane.
Society runs on trust. Over the millennia, we've developed a variety of mechanisms to induce trustworthy behavior in society. These range from a sense of guilt when we cheat, to societal disapproval when we lie, to laws that punish fraudsters, to door locks and burglar alarms that keep thieves out of our homes. They're complicated and interrelated, but they tend to keep society humming along.
Some of us have pledged our allegiance to Google: We have Gmail accounts, we use Google Calendar and Google Docs, and we have Android phones. Others have pledged allegiance to Apple: We have Macintosh laptops, iPhones, and iPads; and we let iCloud automatically synchronize and back up everything. Still others of us let Microsoft do it all. Or we buy our music and e-books from Amazon, which keeps records of what we own and allows downloading to a Kindle, computer, or phone.
I can put my cash card into an ATM anywhere in the world and take out a fistful of local currency, while the corresponding amount is debited from my bank account at home. I don't even think twice: regardless of the country, I trust that the system will work.
The whole world runs on trust. We trust that people on the street won't rob us, that the bank we deposited money in last month will return it this month, that the justice system punishes the guilty and exonerates the innocent.
My big idea is a big question. Every cooperative system contains parasites. How do we ensure that society's parasites don't destroy society's systems?
It's all about trust, really. Not the intimate trust we have in our close friends and relatives, but the more impersonal trust we have in the various people and systems we interact with in society.
Security technologist and author Bruce Schneier looks at the age-old problem of the insider threat
Rajendrasinh Makwana was a UNIX contractor for Fannie Mae. On October 24, he was fired. Before he left, he slipped a logic bomb into the organisation's network. The bomb would have "detonated" on January 31. It was programmed to disable access to the server on which it was running, block any network monitoring software, systematically and irretrievably erase everything, and then replicate itself on all 4,000 Fannie Mae servers.
Cloud computing may represent the future of computing, but users still need to be careful about who is looking after their data
This year's overhyped IT concept is cloud computing. Also called software as a service (SaaS), cloud computing means running software over the internet and accessing it via a browser. The salesforce.com customer management software is an example of this. So is Google Docs. If you believe the hype, cloud computing is the future.
Sports referees are supposed to be fair and impartial. They're not supposed to favor one team over another. And they're most certainly not supposed to have a financial interest in the outcome of a game.
Tim Donaghy, a referee for the National Basketball Association, has been accused of both betting on basketball games and fixing games for the mob.
If you fly out of Logan Airport and don't want to take off your shoes for the security screeners and get your bags opened up, pay attention. The US government is testing its "Trusted Traveler" program, and Logan is the fourth test airport. Currently, only American Airlines frequent fliers are eligible, but if all goes well the program will be opened up to more people and more airports.
Participants provide their name, address, phone number, and birth date, a set of fingerprints, and a retinal scan.
Controlling what a user can do with a piece of data assumes a trust paradigm that doesn't exist in the real world. Software copy protection, intellectual property theft, digital watermarking: different companies claim to solve different parts of this growing problem. Some companies market e-mail security solutions in which the e-mail cannot be read after a certain date, effectively "deleting" it. Other companies sell rights-management software: audio and video files that can't be copied or redistributed, data that can be read but not printed, and software that can't be copied.
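The flaw is structural, and easy to see in miniature. In the sketch below (class and field names are invented for illustration), the "expiry" check runs entirely on the recipient's machine, so a recipient who controls their own software can simply skip it.

```python
# Toy sketch of client-side "self-deleting e-mail". Names are
# invented for illustration; the point is that the enforcement
# code runs on the recipient's own machine.

from datetime import date

class ExpiringMessage:
    def __init__(self, body: str, expires: date):
        self.body = body          # the plaintext is already here
        self.expires = expires

    def read(self) -> str:
        # A well-behaved client refuses to display expired mail...
        if date.today() > self.expires:
            raise PermissionError("message has expired")
        return self.body

msg = ExpiringMessage("meet at noon", expires=date(2001, 1, 1))
# ...but nothing stops the recipient from bypassing the check:
print(msg.body)   # prints "meet at noon" regardless of the date
```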
Magazine articles like to describe cryptography products in terms of algorithms and key length. Algorithms make good sound bites: they can be explained in a few words and they're easy to compare with one another. "128-bit keys mean good security." "Triple-DES means good security." "40-bit keys mean weak security." "2048-bit RSA is better than 1024-bit RSA."
But reality isn't that simple. Longer keys don't always mean more security.
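One reason: a key is only as strong as the process that generated it. The deliberately flawed generator below (invented for this example) emits keys that are nominally 128 bits long but are derived entirely from a 32-bit seed, so an attacker needs at most 2^32 guesses, whatever the key length says on the box.

```python
# Illustration: nominal key length vs. effective security.
# This deliberately flawed generator is invented for this example.

import random

def bad_128_bit_key(seed32: int) -> bytes:
    # The entire 128-bit key is a deterministic function of a
    # 32-bit seed, so only 2**32 distinct keys can ever come out.
    rng = random.Random(seed32 & 0xFFFFFFFF)
    return rng.getrandbits(128).to_bytes(16, "big")

key = bad_128_bit_key(0xDEADBEEF)
print(key.hex())   # looks like a respectable 128-bit key

# An attacker ignores the 2**128 key space and searches seeds instead
# (slow in pure Python, but the size of the search space is the point):
for guess in range(2**32):
    if bad_128_bit_key(guess) == key:
        print("recovered seed:", hex(guess))
        break
```

The effective security here is 32 bits, regardless of the advertised 128; weak generation, not short keys, is the failure mode the sound bites miss.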