
Human-Machine Trust Failures

I jacked a visitor’s badge from the Eisenhower Executive Office Building in Washington, DC, last month. The badges are electronic; they’re enabled when you check in at building security. You’re supposed to wear it on a chain around your neck at all times and drop it through a slot when you leave.

I kept the badge and dropped only the chain through the slot. I used my body as a shield, and the chain made a satisfying noise when it hit bottom. The guard let me through the gate.

The person after me had problems, though. Some part of the system knew something was wrong, and wouldn’t let her out. Eventually, the guard had to manually override something.

My point in telling this story is not to demonstrate how I beat the EEOB’s security—I’m sure the badge was quickly deactivated and showed up in some missing-badge log next to my name—but to illustrate how security vulnerabilities can result from human/machine trust failures. Something went wrong between when I went through the gate and when the person after me did. The system knew it but couldn’t adequately explain it to the guards. The guards knew it but didn’t know the details. Because the failure occurred when the person after me tried to leave the building, they assumed she was the problem. And when they cleared her of wrongdoing, they blamed the system.

In any hybrid security system, the human portion needs to trust the machine portion. To do so, both must understand the expected behavior for every state—how the system can fail and what those failures look like. The machine must be able to communicate its state, and to alert the humans when an expected state transition fails to happen. Things will go wrong, either by accident or as the result of an attack, and the humans are going to need to troubleshoot the system in real time—that requires understanding on both parts. Each time things go wrong and the machine portion doesn’t communicate well, the human portion trusts it a little less.
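
To make this concrete, here is a minimal sketch (purely hypothetical, not any real access-control product) of an exit gate modeled as an explicit state machine. The state names, events, and alert wording are all invented for illustration; the point is that when an expected transition fails to happen, the machine tells the guard what it expected and what it saw instead.

```python
# Hypothetical sketch: an exit gate as an explicit state machine.
# All state and event names are invented for illustration.

EXPECTED = {
    ("waiting", "badge_read"): "badge_checked",
    ("badge_checked", "badge_dropped"): "gate_open",
    ("gate_open", "person_exited"): "waiting",
}

class ExitGate:
    def __init__(self, alert):
        self.state = "waiting"
        self.alert = alert  # callable that notifies the human guard

    def event(self, name):
        next_state = EXPECTED.get((self.state, name))
        if next_state is None:
            # Don't just refuse: explain what was expected and what
            # happened, so the guard can troubleshoot instead of guessing.
            expected_events = [e for (s, e) in EXPECTED if s == self.state]
            self.alert(f"In state '{self.state}', expected one of "
                       f"{expected_events} but got '{name}'.")
            return
        self.state = next_state

gate = ExitGate(alert=print)
gate.event("badge_read")     # OK: waiting -> badge_checked
gate.event("person_exited")  # alert: the badge was never dropped in the slot
```

A gate built this way still fails when a badge isn’t returned, but it fails legibly: the guard learns which transition broke, rather than being left to guess whether the person or the machine is at fault.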

This problem is not specific to security systems, but inducing this sort of confusion is a good way to attack systems. When the attackers understand the system—especially the machine part—better than the humans in the system do, they can create a failure to exploit. Many social engineering attacks fall into this category. Failures also happen the other way. We’ve all experienced trust without understanding, when the human part of the system defers to the machine, even though it makes no sense: “The computer is always right.”

Humans and machines have different strengths. Humans are flexible and can do creative thinking in ways that machines cannot. But they’re easily fooled. Machines are more rigid and can handle state changes and process flows much better than humans can. But they’re bad at dealing with exceptions. If humans are to serve as security sensors, they need to understand what is being sensed. (That’s why “if you see something, say something” fails so often.) If a machine automatically processes input, it needs to clearly flag anything unexpected.

The more machine security is automated, and the more the machine is expected to enforce security without human intervention, the greater the impact of a successful attack. If this sounds like an argument for interface simplicity, it is. The machine design will necessarily be more complicated: more resilience, more error handling, and more internal checking. But the human/computer communication needs to be clear and straightforward. That’s the best way to give humans the trust and understanding they need in the machine part of any security system.

This essay previously appeared in IEEE Security & Privacy.

Posted on September 5, 2013 at 8:32 AM

Syrian Electronic Army Cyberattacks

The Syrian Electronic Army attacked again this week, compromising the websites of the New York Times, Twitter, the Huffington Post, and others.

Political hacking isn’t new. Hackers were breaking into systems for political reasons long before commerce and criminals discovered the Internet. Over the years, we’ve seen UK vs. Ireland, Israel vs. Arab states, Russia vs. its former Soviet republics, India vs. Pakistan, and US vs. China.

There was a big one in 2007, when the government of Estonia was attacked in cyberspace following a diplomatic incident with Russia. It was hyped as the first cyberwar, but the Kremlin denied any Russian government involvement. The only individuals positively identified were young ethnic Russians living in Estonia.

Poke at any of these international incidents, and what you find are kids playing politics. The Syrian Electronic Army doesn’t seem to be an actual army. We don’t even know if they’re Syrian. And—to be fair—I don’t know their ages. Looking at the details of their attacks, it’s pretty clear they didn’t target the New York Times and others directly. They reportedly hacked into an Australian domain name registrar called Melbourne IT, and used that access to disrupt service at a bunch of big-name sites.

We saw this same tactic last year from Anonymous: hack around at random, then retcon a political reason why the sites they successfully broke into deserved it. It makes them look a lot more skilled than they actually are.

This isn’t to say that cyberattacks by governments aren’t an issue, or that cyberwar is something to be ignored. Attacks from China reportedly are a mix of government-executed military attacks, government-sponsored independent attackers, and random hacking groups that work with tacit government approval. The US also engages in active cyberattacks around the world. Together with Israel, the US employed a sophisticated computer virus (Stuxnet) to attack Iran in 2010.

For the typical company, defending against these attacks doesn’t require anything different from what you’ve traditionally been doing to secure yourself in cyberspace. If your network is secure, you’re secure against amateur geopoliticians who just want to help their side.

This essay originally appeared on the Wall Street Journal’s website.

Posted on September 3, 2013 at 1:45 PM

Our Newfound Fear of Risk

We’re afraid of risk. It’s a normal part of life, but we’re increasingly unwilling to accept it at any level. So we turn to technology to protect us. The problem is that technological security measures aren’t free. They cost money, of course, but they cost other things as well. They often don’t provide the security they advertise, and—paradoxically—they often increase risk somewhere else. This problem is particularly stark when the risk involves another person: crime, terrorism, and so on. While technology has made us much safer against natural risks like accidents and disease, it works less well against man-made risks.

Three examples:

  1. We have allowed the police to turn themselves into a paramilitary organization. They deploy SWAT teams multiple times a day, almost always in nondangerous situations. They tase people at minimal provocation, often when it’s not warranted. Unprovoked shootings are on the rise. One result of these measures is that honest mistakes—a wrong address on a warrant, a misunderstanding—result in the terrorizing of innocent people, and more death in what were once nonviolent confrontations with police.
  2. We accept zero-tolerance policies in schools. This results in ridiculous situations, where young children are suspended for pointing gun-shaped fingers at other students or drawing pictures of guns with crayons, and high-school students are disciplined for giving each other over-the-counter pain relievers. The cost of these policies is enormous, both in the dollars to implement them and in their long-lasting effects on students.
  3. We have spent over one trillion dollars and thousands of lives fighting terrorism in the past decade—including the wars in Iraq and Afghanistan—money that could have been better used in all sorts of ways. We now know that the NSA has turned into a massive domestic surveillance organization, and that its data is also used by other government organizations, which then lie about it. Our foreign policy has changed for the worse: we spy on everyone, we trample human rights abroad, our drones kill indiscriminately, and our diplomatic outposts have either closed down or become fortresses. In the months after 9/11, so many people chose to drive instead of fly that the resulting deaths dwarfed the deaths from the terrorist attack itself, because cars are much more dangerous than airplanes.

There are lots more examples, but the general point is that we tend to fixate on a particular risk and then do everything we can to mitigate it, including giving up our freedoms and liberties.

There’s a subtle psychological explanation. Risk tolerance is both cultural and dependent on the environment around us. As we have advanced technologically as a society, we have reduced many of the risks that have been with us for millennia. Fatal childhood diseases are things of the past, many adult diseases are curable, accidents are rarer and more survivable, buildings collapse less often, death by violence has declined considerably, and so on. All over the world—among the wealthier of us who live in peaceful Western countries—our lives have become safer.

Our notions of risk are not absolute; they’re based more on how far they are from whatever we think of as “normal.” So as our perception of what is normal gets safer, the remaining risks stand out more. When your population is dying of the plague, protecting yourself from the occasional thief or murderer is a luxury. When everyone is healthy, it becomes a necessity.

Some of this fear results from imperfect risk perception. We’re bad at accurately assessing risk; we tend to exaggerate spectacular, strange, and rare events, and downplay ordinary, familiar, and common ones. This leads us to believe that violence against police, school shootings, and terrorist attacks are more common and more deadly than they actually are—and that the costs, dangers, and risks of a militarized police, a school system without flexibility, and a surveillance state without privacy are less than they really are.

Some of this fear stems from the fact that we put people in charge of just one aspect of the risk equation. No one wants to be the senior officer who didn’t approve the SWAT team for the one subpoena delivery that resulted in an officer being shot. No one wants to be the school principal who didn’t discipline—no matter how benign the infraction—the one student who became a shooter. No one wants to be the president who rolled back counterterrorism measures, just in time to have a plot succeed. Those in charge will be naturally risk averse, since they personally shoulder so much of the burden.

We also expect that science and technology should be able to mitigate these risks, as they mitigate so many others. There’s a fundamental problem at the intersection of these security measures with science and technology; it has to do with the types of risk they’re arrayed against. Most of the risks we face in life are against nature: disease, accident, weather, random chance. As our science has improved—medicine is the big one, but other sciences as well—we become better at mitigating and recovering from those sorts of risks.

Security measures combat a very different sort of risk: a risk stemming from another person. People are intelligent, and they can adapt to new security measures in ways nature cannot. An earthquake isn’t able to figure out how to topple structures constructed under some new and safer building code, and an automobile won’t invent a new form of accident that undermines medical advances that have made existing accidents more survivable. But a terrorist will change his tactics and targets in response to new security measures. An otherwise innocent person will change his behavior in response to a police force that compels compliance at the threat of a Taser. We will all change, living in a surveillance state.

When you implement measures to mitigate the effects of the random risks of the world, you’re safer as a result. When you implement measures to reduce the risks from your fellow human beings, the human beings adapt and you get less risk reduction than you’d expect—and you also get more side effects, because we all adapt.

We need to relearn how to recognize the trade-offs that come from risk management, especially risk from our fellow human beings. We need to relearn how to accept risk, and even embrace it, as essential to human progress and our free society. The more we expect technology to protect us from people in the same way it protects us from nature, the more we will sacrifice the very values of our society in futile attempts to achieve this security.

This essay previously appeared on Forbes.com.

EDITED TO ADD (8/5): Slashdot thread.

Posted on September 3, 2013 at 6:41 AM

Opsec Details of Snowden Meeting with Greenwald and Poitras

I don’t like stories about the personalities in the Snowden affair, because it detracts from the NSA and the policy issues. But I’m a sucker for operational security, and just have to post this detail from their first meeting in Hong Kong:

Snowden had instructed them that once they were in Hong Kong, they were to go at an appointed time to the Kowloon district and stand outside a restaurant that was in a mall connected to the Mira Hotel. There, they were to wait until they saw a man carrying a Rubik’s Cube, then ask him when the restaurant would open. The man would answer their question, but then warn that the food was bad.

Actually, the whole article is interesting. The author is writing a book about surveillance and privacy, one of probably a half dozen about the Snowden affair that will come out this year.

EDITED TO ADD (8/31): While we’re on the topic, here’s some really stupid opsec on the part of Greenwald and Poitras:

  • Statement from senior Cabinet Office civil servant to #miranda case says material was 58000 ‘highly classified UK intelligence documents’
  • Police who seized documents from #miranda found among them a piece of paper with the decryption password, the statement says
  • This password allowed them to decrypt one file on his seized hard drive, adds Oliver Robbins, Cabinet Office security adviser #miranda

You can’t do this kind of stuff when you’re playing with the big boys.

Posted on August 30, 2013 at 1:54 PM

More on the NSA Commandeering the Internet

If there’s any confirmation that the U.S. government has commandeered the Internet for worldwide surveillance, it is what happened with Lavabit earlier this month.

Lavabit is—well, was—an e-mail service that offered more privacy than the typical large-Internet-corporation services that most of us use. It was a small company, owned and operated by Ladar Levison, and it was popular among the tech-savvy. NSA whistleblower Edward Snowden was among its half-million users.

Last month, Levison reportedly received an order—probably a National Security Letter—to allow the NSA to eavesdrop on everyone’s e-mail accounts on Lavabit. Rather than “become complicit in crimes against the American people,” he turned the service off. Note that we don’t know for sure that he received an NSL—that’s the order authorized by the Patriot Act that doesn’t require a judge’s signature and prohibits the recipient from talking about it—or what it covered. Levison has said that he complied with requests for individual e-mail access in the past, but that this order was very different.

So far, we just have an extreme moral act in the face of government pressure. It’s what happened next that is the most chilling. The government threatened him with arrest, arguing that shutting down this e-mail service was a violation of the order.

There it is. If you run a business, and the FBI or NSA want to turn it into a mass surveillance tool, they believe they can do so, solely on their own initiative. They can force you to modify your system. They can do it all in secret and then force your business to keep that secret. Once they do that, you no longer control that part of your business. You can’t shut it down. You can’t terminate part of your service. In a very real sense, it is not your business anymore. It is an arm of the vast U.S. surveillance apparatus, and if your interest conflicts with theirs then they win. Your business has been commandeered.

For most Internet companies, this isn’t a problem. They are already engaging in massive surveillance of their customers and users—collecting and using this data is the primary business model of the Internet—so it’s easy to comply with government demands and give the NSA complete access to everything. This is what we learned from Edward Snowden. Through programs like PRISM, BLARNEY, and OAKSTAR, the NSA obtained bulk access to services like Gmail and Facebook, and to Internet backbone connections throughout the US and the rest of the world. But if it were a problem for those companies, presumably the government would not allow them to shut down.

To be fair, we don’t know if the government can actually convict someone of closing a business. It might just be part of their coercion tactics. Intimidation and retaliation are part of how the NSA does business.

Former Qwest CEO Joseph Nacchio has a story of what happens to a large company that refuses to cooperate. In February 2001—before the 9/11 terrorist attacks—the NSA approached the four major US telecoms and asked for their cooperation in a secret data collection program, the one we now know to be the bulk metadata collection program exposed by Edward Snowden. Qwest was the only telecom to refuse, leaving the NSA with a hole in its spying efforts. The NSA retaliated by canceling a series of big government contracts with Qwest. The company has since been purchased by CenturyLink, which we presume is more cooperative with NSA demands.

That was before the Patriot Act and National Security Letters. Now, presumably, Nacchio would just comply. Protection rackets are easier when you have the law backing you up.

As the Snowden whistleblowing documents continue to be made public, we’re getting further glimpses into the surveillance state that has been secretly growing around us. The collusion of corporate and government surveillance interests is a big part of this, but so is the government’s resorting to intimidation. Every Lavabit-like service that shuts down—and there have been several—gives us consumers less choice, and pushes us into the large services that cooperate with the NSA. It’s past time we demanded that Congress repeal National Security Letters, give us privacy rights in this new information age, and force meaningful oversight on this rogue agency.

This essay previously appeared in USA Today.

EDITED TO ADD: This essay has been translated into Danish.

Posted on August 30, 2013 at 6:12 AM

How Many Leakers Came Before Snowden?

Assume it’s really true that the NSA has no idea what documents Snowden took, and that they wouldn’t even know he’d taken anything if he hadn’t gone public. The fact that abuses of their systems by NSA officers were largely discovered through self-reporting substantiates that belief.

Given that, why should anyone believe that Snowden is the first person to walk out the NSA’s door with multiple gigabytes of classified documents? He might be the first to release documents to the public, but it’s a reasonable assumption that the previous leakers were working for Russia, or China, or elsewhere.

Posted on August 29, 2013 at 1:13 PM
