The Meaning of Trust

Security technologist and author Bruce Schneier looks at the age-old problem of insider threat

  • Bruce Schneier
  • The Guardian
  • April 16, 2010

Rajendrasinh Makwana was a UNIX contractor for Fannie Mae. On October 24, he was fired. Before he left, he slipped a logic bomb into the organisation’s network. The bomb would have “detonated” on January 31. It was programmed to disable access to the server on which it was running, block any network monitoring software, systematically and irretrievably erase everything, and then replicate itself on all 4,000 Fannie Mae servers. Court papers claim the damage would have been in the millions of dollars.

Luckily, another programmer discovered the script a week later, and disabled it.

Insiders are a perennial problem. They have access, and they’re known by the system. They know how the system and its security work, and where its weak points are. They have opportunity. And, like Makwana’s attempt at revenge, these insiders can have pretty intense motives, motives that can only intensify as the economy continues to suffer and layoffs increase. Insiders are especially pernicious attackers because they’re trusted. They have access because they’re supposed to have access. They have opportunity, and an understanding of the system, because they use it—or they designed, built, or installed it. They’re already inside the security system, making them much harder to defend against.

It’s not possible to design a system without trusted people. They’re everywhere. In offices, employees are trusted people given access to facilities and resources, and allowed to act—sometimes broadly, sometimes narrowly—in the company’s name. In stores, employees are allowed access to the back room and the cash register; and customers are trusted to walk into the store and touch the merchandise. IRS employees are trusted with personal tax information; hospital employees are trusted with personal health information. Banks, airports, and prisons couldn’t operate without trusted people. Replacing trusted people with computers doesn’t make the problem go away; it just moves it around and makes it even more complex.

Good security systems use multiple measures, all working together. In the end, systems will always have trusted people who can subvert them. It’s important to keep in mind that incidents like this don’t happen very often; that most people are honest and honorable. Security is very much designed to protect against the dishonest minority. And often little things—like disabling access immediately upon termination—can go a long way.

Sidebar: Damage Limitation

Insider threats are much, much older than computers. And the solutions haven’t changed much throughout history, either. Here are five basic techniques to deal with trusted people:

  1. Limit the number of trusted people.
  2. Ensure that trusted people are also trustworthy. This is the idea behind background checks, lie detector tests, personality profiling, prohibiting convicted felons from getting certain jobs, limiting other jobs to citizens, the TSA’s no-fly list, and so on.
  3. Limit the amount of trust each person has. This is compartmentalization; the idea here is to limit the amount of damage a person can do if he ends up not being trustworthy.
  4. Give people overlapping spheres of trust. This is what security professionals call defense in depth. It’s why it takes two people with two separate keys to launch nuclear missiles, and two signatures on corporate checks over a certain value.
  5. Detect breaches of trust after the fact and prosecute the guilty. Most of the time, we discover the security breach after the fact and then punish the perpetrator through the legal system. This is why auditing is so vital.
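Two of these techniques, overlapping spheres of trust (item 4) and after-the-fact detection through auditing (item 5), can be sketched in code. The following is a minimal, hypothetical illustration of a dual-control check with an audit trail; all class names, user names, and thresholds are invented for the example and have nothing to do with any real system:

```python
from datetime import datetime, timezone

class DualControlAction:
    """A sensitive action that executes only with two distinct approvers."""

    def __init__(self, description, required_approvals=2):
        self.description = description
        self.required_approvals = required_approvals
        self.approvers = set()
        self.audit_log = []  # technique 5: record everything for later review

    def approve(self, user):
        # A set ignores duplicates, so one person approving twice
        # still counts as a single sphere of trust.
        self.approvers.add(user)
        self._log(f"approved by {user}")

    def execute(self):
        if len(self.approvers) < self.required_approvals:
            self._log("execution denied: insufficient distinct approvers")
            return False
        self._log(f"executed with approvers {sorted(self.approvers)}")
        return True

    def _log(self, event):
        stamp = datetime.now(timezone.utc).isoformat()
        self.audit_log.append((stamp, self.description, event))

# One insider alone cannot trigger the action, even by approving repeatedly.
wire = DualControlAction("transfer over $10,000")
wire.approve("first_officer")
wire.approve("first_officer")     # duplicate: still one distinct approver
assert wire.execute() is False    # denied, and the denial is logged

wire.approve("second_officer")    # a second, independent trusted person
assert wire.execute() is True     # now permitted, with a full audit trail
```

The point of the design is that neither approver can act alone, and every attempt, successful or not, leaves a timestamped record that an auditor can inspect after the fact.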

