January 15, 2016
by Bruce Schneier
CTO, Resilient Systems, Inc.
A free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise.
For back issues, or to subscribe, visit <https://www.schneier.com/crypto-gram.html>.
You can read this issue on the web at <https://www.schneier.com/crypto-gram/archives/2016/…>. These same essays and news items appear in the “Schneier on Security” blog at <http://www.schneier.com/>, along with a lively and intelligent comment section. An RSS feed is available.
In this issue:
- The Internet of Things that Talk About You Behind Your Back
- Using Law against Technology
- DMCA and the Internet of Things
- NSA Spies on Israeli Prime Minister
- Replacing Judgment with Algorithms
- Schneier News
- IT Security and the Normalization of Deviance
The Internet of Things that Talk About You Behind Your Back

Your computerized things are talking about you behind your back, and for the most part you can’t stop them—or even learn what they’re saying.
This isn’t new, but it’s getting worse.
Surveillance is the business model of the Internet, and the more these companies know about the intimate details of your life, the more they can profit from it. Already there are dozens of companies that secretly spy on you as you browse the Internet, connecting your behavior on different sites and using that information to target advertisements. You know it when you search for something like a Hawaiian vacation, and ads for similar vacations follow you around the Internet for weeks. Companies like Google and Facebook make an enormous profit connecting the things you write about and are interested in with companies trying to sell you things.
Cross-device tracking is the latest obsession for Internet marketers. You probably use multiple Internet devices: your computer, your smartphone, your tablet, maybe your Internet-enabled television—and, increasingly, “Internet of Things” devices like smart thermostats and appliances. All of these devices are spying on you, but the different spies are largely unaware of each other. Start-up companies like SilverPush, 4Info, Drawbridge, Flurry, and Cross Screen Consultants, as well as the big players like Google, Facebook, and Yahoo, are all experimenting with different technologies to “fix” this problem.
Retailers want this information very much. They want to know whether their television advertising causes people to search for their products on the Internet. They want to correlate people’s web searching on their smartphones with their buying behavior on their computers. They want to track people’s locations using the surveillance capabilities of their smartphones, and use that information to send geographically targeted ads to their computers. They want the surveillance data from smart appliances correlated with everything else.
This is where the Internet of Things makes the problem worse. As computers get embedded into more of the objects we live with and use, and permeate more aspects of our lives, more companies want to use them to spy on us without our knowledge or consent.
Technically, of course, we did consent. The license agreement we didn’t read but legally agreed to when we unthinkingly clicked “I agree” on a screen, or opened a package we purchased, gives all of those companies the legal right to conduct all of this surveillance. And the way US privacy law is currently written, they own all of that data and don’t need to allow us to see it.
We accept all of this Internet surveillance because we don’t really think about it. If there were a dozen people from Internet marketing companies with pens and clipboards peering over our shoulders as we sent our Gmails and browsed the Internet, most of us would object immediately. If the companies that made our smartphone apps actually followed us around all day, or if the companies that collected our license plate data could be seen as we drove, we would demand they stop. And if our televisions, computers, and mobile devices talked about us and coordinated their behavior in a way we could hear, we would be creeped out.
The Federal Trade Commission is looking at cross-device tracking technologies, with an eye to regulating them. But if recent history is a guide, any regulations will be minor and largely ineffective at addressing the larger problem.
We need to do better. We need to have a conversation about the privacy implications of cross-device tracking, but—more importantly—we need to think about the ethics of our surveillance economy. Do we want companies knowing the intimate details of our lives, and being able to store that data forever? Do we truly believe that we have no rights to see the data that’s collected about us, to correct data that’s wrong, or to have data deleted that’s personal or embarrassing? At a minimum, we need limits on the behavioral data that can legally be collected about us and how long it can be stored, a right to download data collected about us, and a ban on third-party ad tracking. The last one is vital: it’s the companies that spy on us from website to website, or from device to device, that are doing the most damage to our privacy.
The Internet surveillance economy is less than 20 years old, and emerged because there was no regulation limiting any of this behavior. It’s now a powerful industry, and it’s expanding past computers and smartphones into every aspect of our lives. It’s long past time we set limits on what these computers, and the companies that control them, can say about us and do to us behind our backs.
This essay previously appeared on Vice Motherboard.
Surveillance is the business model of the Internet:
Smartphone apps that follow us around:
License plate data collection:
Ethics of our surveillance economy:
Using Law against Technology

In mid-December, a Brazilian judge ordered the text messaging service WhatsApp shut down for 48 hours. It was a monumental action.
WhatsApp is the most popular app in Brazil, used by about 100 million people. The Brazilian telecoms hate the service because it entices people away from more expensive text messaging services, and they have been lobbying for months to convince the government that it’s unregulated and illegal. A judge finally agreed.
In Brazil’s case, WhatsApp was blocked for allegedly failing to respond to a court order. Another judge reversed the ban 12 hours later, but there is a pattern forming here. In Egypt, Vodafone has complained about the legality of WhatsApp’s free voice-calls, while India’s telecoms firms have been lobbying hard to curb messaging apps such as WhatsApp and Viber. Earlier this year, the United Arab Emirates blocked WhatsApp’s free voice call feature.
All this is part of a massive power struggle going on right now between traditional companies and new Internet companies, and we’re all in the blast radius.
It’s one aspect of a tech policy problem that has been plaguing us for at least 25 years: technologists and policymakers don’t understand each other, and they inflict damage on society because of that. But it’s worse today. The speed of technological progress makes it worse. And the types of technology—especially the current Internet of mobile devices everywhere, cloud computing, always-on connections and the Internet of Things—make it worse.
The Internet has been disrupting and destroying long-standing business models since its popularization in the mid-1990s. And traditional industries have long fought back with every tool at their disposal. The movie and music industries have tried for decades to hamstring computers in an effort to prevent illegal copying of their products. Publishers have battled with Google over whether their books could be indexed for online searching.
More recently, municipal taxi companies and large hotel chains are fighting with ride-sharing companies such as Uber and apartment-sharing companies such as Airbnb. Both the old companies and the new upstarts have tried to bend laws to their will in an effort to outmaneuver each other.
Sometimes the actions of these companies harm the users of these systems and services. And the results can seem crazy. Why would the Brazilian telecoms want to provoke the ire of almost everyone in the country? They’re trying to protect their monopoly. If they succeed in shutting down not just WhatsApp but Telegram and all the other text-messaging services, their customers will have no choice. That’s how high the stakes are in these battles.
This isn’t just companies competing in the marketplace. These are battles between competing visions of how technology should apply to business, fought between traditional businesses and “disruptive” new ones. The fundamental problem is that technology and law are in conflict, and what’s worked in the past is increasingly failing today.
First, the speeds of technology and law have reversed. Traditionally, new technologies were adopted slowly over decades. There was time for people to figure them out, and for their social repercussions to percolate through society. Legislatures and courts had time to figure out rules for these technologies and how they should integrate into the existing legal structures.
They don’t always get it right—the sad history of copyright law in the United States is an example of how they can get it badly wrong again and again—but at least they had a chance before the technologies became widely adopted.
That’s just not true anymore. A new technology can go from zero to a hundred million users in a year or less. That’s just too fast for the political or legal process. By the time they’re asked to make rules, these technologies are well-entrenched in society.
Second, the technologies have become more complicated and specialized. This means that the normal system of legislators passing laws, regulators making rules based on those laws and courts providing a second check on those rules fails. None of these people has the expertise necessary to understand these technologies, let alone the subtle and potentially pernicious ramifications of any rules they make.
We see the same disconnect between governments and their law-enforcement agencies and militaries. In the United States, we’re expecting policymakers to understand the debate between the FBI’s desire to read the encrypted e-mails and computers of crime suspects and the security researchers who maintain that giving them that capability will render everyone insecure. We’re expecting legislators to provide meaningful oversight over the National Security Agency, when they can only read highly technical documents about the agency’s activities in special rooms and without any aides who might be conversant in the issues.
The result is that we end up in situations such as the one Brazil finds itself in. WhatsApp went from zero to 100 million users in five years. The telecoms are advancing all sorts of weird legal arguments to get the service banned, and judges are ill-equipped to separate fact from fiction.
This isn’t a simple matter of needing government to get out of the way and let companies battle in the marketplace. These companies are for-profit entities, and their business models are so complicated that they regularly don’t do what’s best for their users. (For example, remember that you’re not really Facebook’s customer. You’re their product.)
The fact that people’s resumes are effectively the first 10 hits on a Google search of their name is a problem—something that the European “right to be forgotten” tried ham-fistedly to address. There’s a lot of smart writing that says that Uber’s disruption of traditional taxis will be worse for the people who regularly use the services. And many people worry about Amazon’s increasing dominance of the publishing industry.
We need a better way of regulating new technologies.
That’s going to require bridging the gap between technologists and policymakers. Each needs to understand the other—not enough to be experts in each other’s fields, but enough to engage in meaningful conversations and debates. That’s also going to require laws that are agile and written to be as technologically invariant as possible.
It’s a tall order, I know, and one that has been on the wish list of every tech policymaker for decades. But today, the stakes are higher and the issues come faster. Not doing so will become increasingly harmful for all of us.
This essay originally appeared on CNN.com.
Our resume is our first ten hits on Google:
Good essay on Uber:
DMCA and the Internet of Things

In theory, the Internet of Things—the connected network of tiny computers inside home appliances, household objects, even clothing—promises to make your life easier and your work more efficient. These computers will communicate with each other and the Internet in homes and public spaces, collecting data about their environment and making changes based on the information they receive. In theory, connected sensors will anticipate your needs, saving you time, money, and energy.
Except when the companies that make these connected objects act in a way that runs counter to the consumer’s best interests—as the technology company Philips did recently with its smart ambient-lighting system, Hue, which consists of a central controller that can remotely communicate with light bulbs. In mid-December, the company pushed out a software update that made the system incompatible with some other manufacturers’ light bulbs, including bulbs that had previously been supported.
The complaints began rolling in almost immediately. The Hue system was supposed to be compatible with an industry standard called ZigBee, but the bulbs that Philips cut off were ZigBee-compliant. Philips backed down and restored compatibility a few days later.
But the story of the Hue debacle—the story of a company using copy protection technology to lock out competitors—isn’t a new one. Plenty of companies set up proprietary standards to ensure that their customers don’t use someone else’s products with theirs. Keurig, for example, puts codes on its single-cup coffee pods, and engineers its coffeemakers to work only with those codes. HP has done the same thing with its printers and ink cartridges.
To stop competitors from simply reverse-engineering the proprietary standard and making compatible peripherals (for example, another coffee manufacturer putting Keurig’s codes on its own pods), these companies rely on a 1998 law called the Digital Millennium Copyright Act (DMCA). The law was originally passed to prevent people from pirating music and movies; while it hasn’t done a lot of good in that regard (as anyone who uses BitTorrent can attest), it has done a lot to inhibit security and compatibility research.
Specifically, the DMCA includes an anti-circumvention provision, which prohibits companies from circumventing “technological protection measures” that “effectively control access” to copyrighted works. That means it’s illegal for someone to create a Hue-compatible light bulb without Philips’ permission, a K-cup-compatible coffee pod without Keurig’s, or an HP-printer-compatible cartridge without HP’s.
By now, we’re used to this in the computer world. In the 1990s, Microsoft used a strategy it called “embrace, extend, extinguish,” in which it gradually added proprietary capabilities to products that already adhered to widely used standards. Some more recent examples: Amazon’s e-book format doesn’t work on other companies’ readers, music purchased from Apple’s iTunes store doesn’t work with other music players, and every game console has its own proprietary game cartridge format.
Because companies can enforce anti-competitive behavior this way, there’s a litany of things that just don’t exist, even though they would make life easier for consumers in significant ways. You can’t have custom software for your cochlear implant, or your programmable thermostat, or your computer-enabled Barbie doll. An auto repair shop can’t design a better diagnostic system that interfaces with a car’s computers. And John Deere has claimed that it owns the software on all of its tractors, meaning the farmers that purchase them are prohibited from repairing or modifying their property.
As the Internet of Things becomes more prevalent, so too will this kind of anti-competitive behavior—which undercuts the purpose of having smart objects in the first place. We’ll want our light bulbs to communicate with a central controller, regardless of manufacturer. We’ll want our clothes to communicate with our washing machines and our cars to communicate with traffic signs.
We can’t have this when companies can cut off compatible products, or use the law to prevent competitors from reverse-engineering their products to ensure compatibility across brands. For the Internet of Things to provide any value, what we need is a world that looks like the automotive industry, where you can go to a store and buy replacement parts made by a wide variety of different manufacturers. Instead, the Internet of Things is on track to become a battleground of competing standards, as companies try to build monopolies by locking each other out.
This essay previously appeared on TheAtlantic.com.
Keurig cup DRM:
HP ink cartridge DRM:
John Deere DRM:
Last month, the city of Los Angeles closed all of its schools—over 1,000 schools—because of a bomb threat. It was a hoax.
New paper: “On the Security and Usability of Crypto Phones,” by Maliheh Shirvanian and Nitesh Saxena. Their findings should come as no surprise: users often compromise their own security by making mistakes setting up and using their encryption apps.
The Intercept has “a secret, internal U.S. government catalogue of dozens of cellphone surveillance devices used by the military and by intelligence agencies.” Lots of detailed information about Stingrays and similar equipment.
Gene Spafford writes about the history of “Practical Unix Security.”
If you like puzzles, GCHQ has one for you.
Just don’t let it distract you from fighting the UK legislation giving GCHQ new surveillance powers.
Juniper has warned about a malicious back door in its firewalls that automatically decrypts VPN traffic. It’s been there for years. It’s a complicated story, and here are some good links that talk about it.
As part of this article, The Intercept published a 2011 GCHQ document outlining its exploit capabilities against Juniper networking equipment, including routers and NetScreen firewalls. This doesn’t have much to do with the Juniper backdoor currently in the news, but the document does provide even more evidence that (despite what the government says) the NSA hoards vulnerabilities in commonly used software for attack purposes instead of improving security for everyone by disclosing them.
In thinking about the equities process, it’s worth differentiating among three different things: bugs, vulnerabilities, and exploits. Bugs are plentiful in code, but not all bugs can be turned into vulnerabilities. And not all vulnerabilities can be turned into exploits. Exploits are what matter; they’re what everyone uses to compromise our security. Fixing bugs and vulnerabilities is important because they could potentially be turned into exploits. I think the US government deliberately clouds the issue when they say that they disclose almost all bugs they discover, ignoring the much more important question of how often they disclose exploits they discover. What this document shows is that—despite their insistence that they prioritize security over surveillance—they like to hoard exploits against commonly used network equipment.
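The funnel from bug to vulnerability to exploit can be made concrete with a deliberately simple sketch (hypothetical code, not drawn from any real product):

```python
import shlex

# A bug that is probably not a vulnerability: this off-by-one error
# crashes with an IndexError but gives an attacker no control.
def last_item(items):
    return items[len(items)]  # should be items[len(items) - 1]

# A vulnerability: untrusted input concatenated into a shell command.
# An exploit is the concrete input that weaponizes it—passing the
# host "8.8.8.8; cat /etc/passwd" turns a ping into command injection.
def build_ping_command_unsafe(host):
    return "ping -c 1 " + host

# The fix: quote untrusted input so the shell sees a single argument.
def build_ping_command_safe(host):
    return "ping -c 1 " + shlex.quote(host)
```

Disclosing and fixing bugs and vulnerabilities closes them off before anyone builds an exploit, which is the trade-off at the heart of the equities debate.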
Note: In case anyone is researching this issue, here is my complete list of useful links on various different aspects of the ongoing debate.
This interesting article by medieval historian Amanda Power traces our culture’s relationship with mass surveillance back to the medieval characterization of the Christian god and the church’s policing of piety.
Two good pieces of writing on the Second Crypto War. The first is “Wanting It Bad Enough Won’t Make It Work: Why Adding Backdoors and Weakening Encryption Threatens the Internet,” by Meredith Whittaker and Ben Laurie.
The second is “The Second Crypto War Is Not about Crypto,” by Jaap-Henk Hoepman.
This weird story describes a “porn dog” that is trained to find hidden hard drives. It’s used in child porn investigations. I suppose it’s reasonable that computer disks have a particular chemical smell, but I wonder what it is.
Earlier this month, a Las Vegas taco shop was robbed in the middle of the night. The restaurant took the video-surveillance footage and turned it into a combination commercial for their tacos and request for help identifying the burglars.
Brian Krebs has a story about PayPal’s substandard authentication. Bottom line: PayPal has no excuse for this kind of stuff. I hope the public shaming incents them to offer better authentication for their customers.
A lot of Pennsylvania government officials are being hurt as a result of e-mails being made public. This is all the result of political pressure to release the e-mails, and not an organizational doxing attack, but the effects are the same.
Our psychology of e-mail doesn’t match the reality. We treat it as ephemeral, even though it’s not. And the archival nature of e-mail—or text messages, or Twitter chats, or Facebook conversations—isn’t salient.
Cory Doctorow has a good essay on software integrity and control problems and the Internet of Things. He’s writing about self-driving cars, but the issue is much more general. Basically, we’re going to want systems that prevent their owners from making certain changes to them. We know how to do this: digital rights management. We also know that this solution doesn’t work, and that trying introduces all sorts of security vulnerabilities. So we have a problem.
This is an old problem. (Adam Shostack and I wrote a paper about it in 1999, about smart cards.) The Internet of Things is going to make it much worse. And it’s one we’re not anywhere near prepared to solve.
De-anonymizing users from their coding styles.
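The underlying research builds classifiers over abstract-syntax-tree features; as a rough, hypothetical illustration of the idea, even crude surface-level features leave measurable stylistic traces:

```python
# Toy surface-level style fingerprint; real stylometry research uses
# abstract-syntax-tree features and trained classifiers, not this.
def style_fingerprint(source):
    lines = [l for l in source.splitlines() if l.strip()]
    n = max(len(lines), 1)
    return {
        "avg_line_length": sum(len(l) for l in lines) / n,
        "tabs_vs_spaces": sum(l.startswith("\t") for l in lines),
        "snake_case_hint": source.count("_"),
        "camel_case_hint": sum(1 for i in range(1, len(source))
                               if source[i].isupper() and source[i - 1].islower()),
        "comment_density": sum("#" in l for l in lines) / n,
    }

# L1 distance between two fingerprints: smaller means more similar style.
def distance(fp_a, fp_b):
    return sum(abs(fp_a[k] - fp_b[k]) for k in fp_a)
```

A real attack would extract hundreds of such features from code samples of known authorship and train a classifier; this toy merely shows that style is quantifiable.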
Micah Lee wrote a good article that talks about how Microsoft is collecting the hard-drive encryption keys of Windows 10 users, and how to disable that “feature.”
More useful information:
Recently declassified: “Deception Maxims: Fact and Folklore,” Office of Research and Development, Central Intelligence Agency, June 1981. Research on deception and con games has advanced in the past 35 years, but this is still interesting to read.
There’s an excellent article in Foreign Affairs on how the European insistence on data privacy—most recently illustrated by their invalidation of the “safe harbor” agreement—is really about the US talking out of both sides of its mouth on the issue: championing privacy in public, but spying on everyone in private. As long as the US keeps this up, the authors argue, this issue will get worse.
Stewart Baker on the same topic:
Nice essay that lists ten “truths” about terrorism. Nothing will be news to regular readers of this newsletter.
Fascinating “New Yorker” article about Samantha Azzopardi, serial con artist and deceiver.
The article is really about how our brains allow stories to deceive us.
Mac OS X, iOS, and Flash had the most discovered vulnerabilities in 2015. The article goes on to explain why Windows vulnerabilities might be counted higher, and gives the top 50 software packages for vulnerabilities.
The interesting discussion topic is how this relates to how secure the software is. Is software with more discovered vulnerabilities better because they’re all fixed? Is software with more discovered vulnerabilities less secure because there are so many? Or are they all equally bad, and people just look at some software more than others? No one knows.
Last week, former NSA Director Michael Hayden made a very strong argument against deliberately weakening security products by adding backdoors.
This isn’t new, and is yet another example of the split between the law-enforcement and intelligence communities on this issue. What is new is Hayden saying, effectively: Hey FBI, you guys are idiots for trying to get back doors.
On the other side of the Atlantic Ocean, the Dutch government has come out against backdoors in security products, and in favor of strong encryption.
Meanwhile, I have been hearing rumors that “serious” US legislation mandating backdoors is about to be introduced. These rumors are pervasive, but without details.
An analysis of the opsec Sean Penn used for his meeting with El Chapo. There has been lots of speculation about whether this was enough, or whether Mexican officials tracked El Chapo down because of his meeting with Penn.
Cory Doctorow has a good post on the EFF website about how they’re trying to fight digital rights management software in the World Wide Web Consortium. The W3C added DRM to the web’s standards in 2013. This doesn’t reverse that terrible decision, but it’s a step in the right direction.
Troy Hunt has identified a new spam vector. PayPal allows someone to send someone else a $0 invoice. The spam is in the notes field. But it’s a legitimate e-mail from PayPal, so it evades many of the traditional spam filters. Presumably it doesn’t cost anything to send a $0 invoice via PayPal. Hopefully, the company will close this loophole soon.
NSA Spies on Israeli Prime Minister

The Wall Street Journal has a story that the NSA spied on Israeli Prime Minister Benjamin Netanyahu and other Israeli government officials, and incidentally collected conversations between US citizens—including lawmakers—and those officials.
US lawmakers who are usually completely fine with NSA surveillance are aghast at this behavior, as both Glenn Greenwald and Trevor Timm explain. Greenwald:
So now, with yesterday’s WSJ report, we witness the tawdry spectacle of large numbers of people who for years were fine with, responsible for, and even giddy about NSA mass surveillance suddenly objecting. Now they’ve learned that they themselves, or the officials of the foreign country they most love, have been caught up in this surveillance dragnet, and they can hardly contain their indignation. Overnight, privacy is of the highest value because now it’s *their* privacy, rather than just yours, that is invaded.
This reminds me of the 2013 story that the NSA eavesdropped on the cell phone of the German Chancellor Angela Merkel. Back then, I wrote:
Spying on foreign governments is what the NSA is supposed to do. Much more problematic, and dangerous, is that the NSA is spying on entire populations.
Greenwald said the same thing:
I’ve always argued that on the spectrum of spying stories, revelations about targeting foreign leaders is the least important, since that is the most justifiable type of espionage. Whether the U.S. should be surveilling the private conversations of officials of allied democracies is certainly worth debating, but, as I argued in my 2014 book, those “revelations … are less significant than the agency’s warrantless mass surveillance of whole populations” since “countries have spied on heads of state for centuries, including allies.”
And that’s the key point. I am less concerned about Angela Merkel than about the other 82 million Germans who are being spied on, and I am less concerned about Benjamin Netanyahu than I am about the other 8 million people living in that country.
Over on Lawfare, Ben Wittes agrees:
There is absolutely nothing surprising about NSA’s activities here—or about the administration’s activities. There is no reason to expect illegality or impropriety. In fact, the remarkable aspect of this story is how constrained both the administration’s and the agency’s behavior appears to have been by rules and norms in exactly the fashion one would hope to see.
So let’s boil this down to brass tacks: NSA spied on a foreign leader at a time when his country had a major public foreign policy showdown with the President of the United States over sharp differences between the two countries over Iran’s nuclearization—indeed, at a time when the US believed that leader was contemplating military action without advance notice to the United States. In the course of this surveillance, NSA incidentally collected communications involving members of Congress, who were being heavily lobbied by the Israeli government and Netanyahu personally. There is no indication that the members of Congress were targeted for collection. Moreover, there’s no indication that the rules that govern incidental collection involving members of Congress were not followed. The White House, for its part, appears to have taken a hands-off approach, directing NSA to follow its own policies about what to report, even on a sensitive matter involving delicate negotiations in a tense period with an ally.
The words that really matter are “incidental collection.” I have no doubt that the NSA followed its own rules in that regard. The discussion we need to have is about whether those rules are the correct ones. Section 702 incidental collection is a huge loophole that allows the NSA to collect information on millions of innocent Americans.
This claim of “incidental collection” has always been deceitful, designed to mask the fact that the NSA does indeed frequently spy on the conversations of American citizens without warrants of any kind. Indeed, as I detailed here, the 2008 FISA law enacted by Congress had as one of its principal, explicit purposes allowing the NSA to eavesdrop on Americans’ conversations *without warrants of any kind*. “The principal purpose of the 2008 law was to make it possible for the government to collect Americans’ international communications—and to collect those communications without reference to whether any party to those communications was doing anything illegal,” the ACLU’s Jameel Jaffer said. “And a lot of the government’s advocacy is meant to obscure this fact, but it’s a crucial one: The government doesn’t need to ‘target’ Americans in order to collect huge volumes of their communications.”
If you’re a member of Congress, there are special rules that the NSA has to follow if you’re incidentally spied on:
Special safeguards for lawmakers, dubbed the “Gates Rule,” were put in place starting in the 1990s. Robert Gates, who headed the Central Intelligence Agency from 1991 to 1993, and later went on to be President Barack Obama’s Defense Secretary, required intelligence agencies to notify the leaders of the congressional intelligence committees whenever a lawmaker’s identity was revealed to an executive branch official.
If you’re a regular American citizen, don’t expect any such notification. Your information can be collected, searched, and then saved for later searching, without a warrant. And if you’re a common German, Israeli, or any other country’s citizen, you have even fewer rights.
In 2014, I argued that we need to separate the NSA’s espionage mission against agents of a foreign power from its broad surveillance of Americans. I still believe that. But more urgently, we need to reform Section 702 when it comes up for reauthorization in 2017.
Glenn Greenwald on the story:
Trevor Timm on the story:
Ben Wittes on the story:
Section 702 incidental collection:
NSA surveillance rules regarding Congress:
My argument to break up the NSA:
Jake Laperruque on the story:
Marcy Wheeler on the story:
Replacing Judgment with Algorithms

China is considering a new “social credit” system, designed to rate everyone’s trustworthiness. Many fear that it will become a tool of social control—but in reality it has a lot in common with the algorithms and systems that score and classify us all every day.
Human judgment is being replaced by automatic algorithms, and that brings with it both enormous benefits and risks. The technology is enabling a new form of social control, sometimes deliberately and sometimes as a side effect. And as the Internet of Things ushers in an era of more sensors and more data—and more algorithms—we need to ensure that we reap the benefits while avoiding the harms.
Right now, the Chinese government is watching how companies use “social credit” scores in state-approved pilot projects. The most prominent one is Sesame Credit, and it’s much more than a financial scoring system.
Citizens are judged not only by conventional financial criteria, but by their actions and associations. Rumors abound about how this system works. Various news sites are speculating that your score will go up if you share a link from a state-sponsored news agency and go down if you post pictures of Tiananmen Square. Similarly, your score will go up if you purchase local agricultural products and down if you purchase Japanese anime. Right now the worst fears seem overblown, but could certainly come to pass in the future.
This story has spread because it’s just the sort of behavior you’d expect from the authoritarian government in China. But there’s little about the scoring systems used by Sesame Credit that’s unique to China. All of us are being categorized and judged by similar algorithms, both by companies and by governments. While the aim of these systems might not be social control, it’s often the byproduct. And if we’re not careful, the creepy results we imagine for the Chinese will be our lot as well.
Sesame Credit is largely based on a US system called FICO. That’s the system that determines your credit score. You actually have a few dozen different ones, and they determine whether you can get a mortgage, car loan or credit card, and what sorts of interest rates you’re offered. The exact algorithm is secret, but we know in general what goes into a FICO score: how much debt you have, how good you’ve been at repaying your debt, how long your credit history is and so on.
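To make the mechanism concrete, a score of this kind is essentially a weighted function of a handful of financial inputs, scaled into a familiar range. The factors and weights below are invented for illustration; the actual FICO formula is proprietary:

```python
# Toy illustration of a FICO-style credit score: a weighted combination
# of a few financial factors, scaled into the familiar 300-850 range.
# The factors and weights here are invented for illustration; the real
# FICO algorithm is secret.

def toy_credit_score(utilization, on_time_ratio, history_years):
    """utilization: fraction of available credit in use (0.0-1.0)
    on_time_ratio: fraction of payments made on time (0.0-1.0)
    history_years: length of credit history in years"""
    payment = 0.35 * on_time_ratio                    # repayment record
    debt = 0.30 * (1.0 - utilization)                 # lower utilization is better
    history = 0.15 * min(history_years / 20.0, 1.0)   # capped at 20 years
    other = 0.20 * 0.5                                # placeholder for remaining factors
    raw = payment + debt + history + other            # 0.0-1.0
    return round(300 + raw * 550)                     # scale to 300-850

print(toy_credit_score(utilization=0.2, on_time_ratio=0.98, history_years=10))  # → 717
```

The point of the sketch is that nothing here requires knowing the borrower: the inputs are whatever data the scorer can collect, and the output is a single portable number.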
There’s nothing about your social network, but that might change. In August, Facebook was awarded a patent on using a borrower’s social network to help determine if he or she is a good credit risk. Basically, your creditworthiness becomes dependent on the creditworthiness of your friends. Associate with deadbeats, and you’re more likely to be judged as one.
Your associations can be used to judge you in other ways as well. It’s now common for employers to use social media sites to screen job applicants. This manual process is increasingly being outsourced and automated; companies like Social Intelligence, Evolv and First Advantage automatically process your social networking activity and provide hiring recommendations for employers. The dangers of this type of system—from discriminatory biases resulting from the data to an obsession with scores over more nuanced social measures—are too many to list.
The company Klout tried to make a business of measuring your online influence, hoping its proprietary system would become an industry standard used for things like hiring and giving out free product samples.
The US government is judging you as well. Your social media postings could get you on the terrorist watch list, affecting your ability to fly on an airplane and even get a job. In 2012, a British tourist’s tweet caused the US to deny him entry into the country. We know that the National Security Agency uses complex computer algorithms to sift through the Internet data it collects on both Americans and foreigners.
All of these systems, from Sesame Credit to the NSA’s secret algorithms, are made possible by computers and data. A couple of generations ago, you would apply for a home mortgage at a bank that knew you, and a bank manager would make a determination of your creditworthiness. Yes, the system was prone to all sorts of abuses, ranging from discrimination to an old-boy network of friends helping friends. But the system also couldn’t scale. It made no sense for a bank across the state to give you a loan, because they didn’t know you. Loans stayed local.
FICO scores changed that. Now, a computer crunches your credit history and produces a number. And you can take that number to any mortgage lender in the country. They don’t need to know you; your score is all they need to decide whether you’re trustworthy.
This score enabled the home mortgage, car loan, credit card and other lending industries to explode, but it brought with it other problems. People who don’t conform to the financial norm—having and using credit cards, for example—can have trouble getting loans when they need them. The automatic nature of the system enforces conformity.
The secrecy of the algorithms further pushes people toward conformity. If you are worried that the US government will classify you as a potential terrorist, you’re less likely to friend Muslims on Facebook. If you know that your Sesame Credit score is partly based on your not buying “subversive” products or being friends with dissidents, you’re more likely to overcompensate by not buying anything but the most innocuous books or corresponding with the most boring people.
Uber is an example of how this works. Passengers rate drivers and drivers rate passengers; both risk getting booted out of the system if their rankings get too low. This weeds out bad drivers and passengers, but also results in marginal people being blocked from the system, and everyone else trying to not make any special requests, avoid controversial conversation topics, and generally behave like good corporate citizens.
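The mechanism is easy to sketch: after each ride both parties rate each other, and anyone whose running average falls below some cutoff is deactivated. This is a minimal model, not Uber’s actual policy; the 4.6 cutoff is an assumption for illustration:

```python
# Minimal sketch of a two-sided rating system: each ride, both parties
# rate each other 1-5, and anyone whose running average drops below a
# cutoff is deactivated. The 4.6 cutoff is an assumed figure, not
# Uber's actual policy.

CUTOFF = 4.6

class Participant:
    def __init__(self, name):
        self.name = name
        self.ratings = []

    def rate(self, stars):
        self.ratings.append(stars)

    @property
    def average(self):
        return sum(self.ratings) / len(self.ratings)

    @property
    def active(self):
        return self.average >= CUTOFF

def ride(driver, passenger, driver_stars, passenger_stars):
    """After each ride, the passenger rates the driver and vice versa."""
    driver.rate(driver_stars)
    passenger.rate(passenger_stars)

d = Participant("driver")
p = Participant("passenger")
ride(d, p, driver_stars=5, passenger_stars=5)
ride(d, p, driver_stars=4, passenger_stars=5)
ride(d, p, driver_stars=4, passenger_stars=5)
print(d.average, d.active)  # → 4.333333333333333 False
```

Notice what the model rewards: with a high cutoff, even a couple of four-star rides end a career, so the rational strategy for everyone is to be as inoffensive as possible.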
Many have documented a chilling effect among American Muslims, with them avoiding certain discussion topics lest they be taken the wrong way. Even if nothing would happen because of it, their free speech has been curtailed because of the secrecy surrounding government surveillance. How many of you are reluctant to Google “pressure cooker bomb”? How many are a bit worried that I used it in this essay?
This is what social control looks like in the Internet age. The Cold-War-era methods of undercover agents, informants living in your neighborhood, and agents provocateurs are too labor-intensive and inefficient. These automatic algorithms make possible a wholly new way to enforce conformity. And by accepting algorithmic classification into our lives, we’re paving the way for the same sort of thing China plans to put into place.
It doesn’t have to be this way. We can get the benefits of automatic algorithmic systems while avoiding the dangers. It’s not even hard.
The first step is to make these algorithms public. Companies and governments both balk at this, fearing that people will deliberately try to game them, but the alternative is much worse.
The second step is for these systems to be subject to oversight and accountability. It’s already illegal for these algorithms to have discriminatory outcomes, even if they’re not deliberately designed in. This concept needs to be expanded. We as a society need to understand what we expect out of the algorithms that automatically judge us and ensure that those expectations are met.
We also need to provide manual systems for people to challenge their classifications. Automatic algorithms are going to make mistakes, whether it’s by giving us bad credit scores or flagging us as terrorists. We need the ability to clear our names if this happens, through a process that restores human judgment.
Sesame Credit sounds like a dystopia because we can easily imagine how the Chinese government can use a system like this to enforce conformity and stifle dissent. Our own systems seem safer, because we don’t believe the corporations and governments that run them are malevolent. But the dangers are inherent in the technologies. As we move into a world where we are increasingly judged by algorithms, we need to ensure that they do so fairly and properly.
This essay previously appeared on CNN.com.
Employer use of this data:
Chilling effect of surveillance:
The necessity of oversight and accountability:
The Technoskeptic has posted a good interview with me on its website. Normally it charges for its content, but this interview is available for free.
Professional pilot Ron Rapp has written a fascinating article on a 2014 Gulfstream plane that crashed on takeoff. The accident was 100% human error and entirely preventable—the pilots ignored procedures, checklists, and warning signs again and again. Rapp uses it as an example of what systems theorists call the “normalization of deviance,” a term coined by sociologist Diane Vaughan:
Social normalization of deviance means that people within the organization become so much accustomed to a deviant behaviour that they don’t consider it as deviant, despite the fact that they far exceed their own rules for the elementary safety. But it is a complex process with some kind of organizational acceptance. The people outside see the situation as deviant whereas the people inside get accustomed to it and do not. The more they do it, the more they get accustomed. For instance in the Challenger case there were design flaws in the famous “O-rings,” although they considered that by design the O-rings would not be damaged. In fact it happened that they suffered some recurrent damage. The first time the O-rings were damaged the engineers found a solution and decided the space transportation system to be flying with “acceptable risk.” The second time damage occurred, they thought the trouble came from something else. Because in their mind they believed they fixed the newest trouble, they again defined it as an acceptable risk and just kept monitoring the problem. And as they recurrently observed the problem with no consequence they got to the point that flying with the flaw was normal and acceptable. Of course, after the accident, they were shocked and horrified as they saw what they had done.
The point is that normalization of deviance is a gradual process that leads to a situation where unacceptable practices or standards become acceptable, and flagrant violations of procedure become normal—despite the fact that everyone involved knows better.
I think this is a useful term for IT security professionals. I have long said that the fundamental problems in computer security are not about technology; instead, they’re about using technology. We have lots of technical tools at our disposal, and if technology alone could secure networks we’d all be in great shape. But, of course, it can’t. Security is fundamentally a human problem, and there are people involved in security every step of the way. We know that people are regularly the weakest link. We have trouble getting people to follow good security practices and not undermine them as soon as they’re inconvenient. Rules are ignored.
As long as the organizational culture turns a blind eye to these practices, the predictable result is insecurity.
None of this is unique to IT. Looking at the healthcare field, John Banja identifies seven factors that contribute to the normalization of deviance:
1. The rules are stupid and inefficient!
2. Knowledge is imperfect and uneven.
3. The work itself, along with new technology, can disrupt work behaviors and rule compliance.
4. I’m breaking the rule for the good of my patient!
5. The rules don’t apply to me/you can trust me.
6. Workers are afraid to speak up.
7. Leadership withholding or diluting findings on system problems.
Dan Luu has written about this, too.
I see these same factors again and again in IT, especially in large organizations. We constantly battle this culture, and we’re regularly cleaning up the aftermath of people getting things wrong. The culture of IT relies on single expert individuals, with all the problems that come along with that. And false positives can wear down a team’s diligence, bringing about complacency.
I don’t have any magic solutions here. Banja’s suggestions are good, but general:
- Pay attention to weak signals.
- Resist the urge to be unreasonably optimistic.
- Teach employees how to conduct emotionally uncomfortable conversations.
- System operators need to feel safe in speaking up.
- Realize that oversight and monitoring are never-ending.
The normalization of deviance is something we have to face, especially in areas like incident response where we can’t get people out of the loop. People believe they know better and deliberately ignore procedure, and invariably forget things. Recognizing the problem is the first step toward solving it.
This essay previously appeared on the Resilient Systems blog.
Ron Rapp’s article:
Diane Vaughan on the topic:
People are the weakest link in security:
John Banja’s article:
Dan Luu’s article:
Me on the future of incident response:
Since 1998, CRYPTO-GRAM has been a free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise. You can subscribe, unsubscribe, or change your address on the Web at <https://www.schneier.com/crypto-gram.html>. Back issues are also available at that URL.
Please feel free to forward CRYPTO-GRAM, in whole or in part, to colleagues and friends who will find it valuable. Permission is also granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.
CRYPTO-GRAM is written by Bruce Schneier. Bruce Schneier is an internationally renowned security technologist, called a “security guru” by The Economist. He is the author of 12 books—including “Liars and Outliers: Enabling the Trust that Society Needs to Thrive”—as well as hundreds of articles, essays, and academic papers. His influential newsletter “Crypto-Gram” and his blog “Schneier on Security” are read by over 250,000 people. He has testified before Congress, is a frequent guest on television and radio, has served on several government committees, and is regularly quoted in the press. Schneier is a fellow at the Berkman Center for Internet and Society at Harvard Law School, a program fellow at the New America Foundation’s Open Technology Institute, a board member of the Electronic Frontier Foundation, an Advisory Board Member of the Electronic Privacy Information Center, and the Chief Technology Officer at Resilient Systems, Inc. See <https://www.schneier.com>.
Crypto-Gram is a personal newsletter. Opinions expressed are not necessarily those of Resilient Systems, Inc.
Copyright (c) 2016 by Bruce Schneier.