November 15, 2009
by Bruce Schneier
Chief Security Technology Officer, BT
A free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise.
For back issues, or to subscribe, visit <http://www.schneier.com/crypto-gram.html>.
You can read this issue on the web at <http://www.schneier.com/crypto-gram-0911.html>. These same essays appear in the “Schneier on Security” blog: <http://www.schneier.com/>. An RSS feed is available.
In this issue:
- Beyond Security Theater
- Fear and Overreaction
- Zero-Tolerance Policies
- Security in a Reputation Economy
- Schneier News
- The Commercial Speech Arms Race
- The Doghouse: ADE 651
- “Evil Maid” Attacks on Encrypted Hard Drives
- Is Antivirus Dead?
Beyond Security Theater

[I was asked to write this essay for the “New Internationalist” (n. 427, November 2009, pp. 10-13). It’s nothing I haven’t said before, but I’m pleased with how this essay came together.]
Terrorism is rare, far rarer than many people think. It’s rare because very few people want to commit acts of terrorism, and executing a terrorist plot is much harder than television makes it appear. The best defenses against terrorism are largely invisible: investigation, intelligence, and emergency response. But even these are less effective at keeping us safe than our social and political policies, both at home and abroad. However, our elected leaders don’t think this way: they are far more likely to implement security theater against movie-plot threats.
A movie-plot threat is an overly specific attack scenario. Whether it’s terrorists with crop dusters, terrorists contaminating the milk supply, or terrorists attacking the Olympics, specific stories affect our emotions more intensely than mere data does. Stories are what we fear. It’s not just hypothetical stories: terrorists flying planes into buildings, terrorists with bombs in their shoes or in their water bottles, and terrorists with guns and bombs waging a co-ordinated attack against a city are even scarier movie-plot threats because they actually happened.
Security theater refers to security measures that make people feel more secure without doing anything to actually improve their security. An example: the photo ID checks that have sprung up in office buildings. No-one has ever explained why verifying that someone has a photo ID provides any actual security, but it looks like security to have a uniformed guard-for-hire looking at ID cards. Airport-security examples include the National Guard troops stationed at US airports in the months after 9/11—their guns had no bullets. The US colour-coded system of threat levels, the pervasive harassment of photographers, and the metal detectors that are increasingly common in hotels and office buildings since the Mumbai terrorist attacks, are additional examples.
To be sure, reasonable arguments can be made that some terrorist targets are more attractive than others: airplanes because a small bomb can result in the death of everyone aboard, monuments because of their national significance, national events because of television coverage, and transportation because of the numbers of people who commute daily. But there are literally millions of potential targets in any large country (there are five million commercial buildings alone in the US), and hundreds of potential terrorist tactics; it’s impossible to defend every place against everything, and it’s impossible to predict which tactic and target terrorists will try next.
Feeling and Reality
Security is both a feeling and a reality. The propensity for security theater comes from the interplay between the public and its leaders. When people are scared, they need something done that will make them feel safe, even if it doesn’t truly make them safer. Politicians naturally want to do something in response to crisis, even if that something doesn’t make any sense.
Often, this “something” is directly related to the details of a recent event: we confiscate liquids, screen shoes, and ban box cutters on airplanes. But it’s not the target and tactics of the last attack that are important, but the next attack. These measures are only effective if we happen to guess what the next terrorists are planning. If we spend billions defending our rail systems, and the terrorists bomb a shopping mall instead, we’ve wasted our money. If we concentrate airport security on screening shoes and confiscating liquids, and the terrorists hide explosives in their brassieres and use solids, we’ve wasted our money. Terrorists don’t care what they blow up and it shouldn’t be our goal merely to force the terrorists to make a minor change in their tactics or targets.
Our penchant for movie plots blinds us to the broader threats. And security theater consumes resources that could better be spent elsewhere.
Any terrorist attack is a series of events: something like planning, recruiting, funding, practicing, executing, aftermath. Our most effective defenses are at the beginning and end of that process—intelligence, investigation, and emergency response—and least effective when they require us to guess the plot correctly. By intelligence and investigation, I don’t mean the broad data-mining or eavesdropping systems that have been proposed and in some cases implemented—those are also movie-plot stories without much basis in actual effectiveness—but instead the traditional “follow the evidence” type of investigation that has worked for decades.
Unfortunately for politicians, the security measures that work are largely invisible. Such measures include enhancing the intelligence-gathering abilities of the secret services, hiring cultural experts and Arabic translators, building bridges with Islamic communities both nationally and internationally, funding police capabilities—both investigative arms to prevent terrorist attacks, and emergency communications systems for after attacks occur—and arresting terrorist plotters without media fanfare. They do not include expansive new police or spying laws. Our police don’t need any new laws to deal with terrorism; rather, they need apolitical funding. These security measures don’t make good television, and they don’t help, come re-election time. But they work, addressing the reality of security instead of the feeling.
The arrest of the “liquid bombers” in London is an example: they were caught through old-fashioned intelligence and police work. Their choice of target (airplanes) and tactic (liquid explosives) didn’t matter; they would have been arrested regardless.
But even as we do all of this we cannot neglect the feeling of security, because it’s how we collectively overcome the psychological damage that terrorism causes. It’s not security theater we need, it’s direct appeals to our feelings. The best way to help people feel secure is by acting secure around them. Instead of reacting to terrorism with fear, we—and our leaders—need to react with indomitability.
Refuse to Be Terrorized
By not overreacting, by not responding to movie-plot threats, and by not becoming defensive, we demonstrate the resilience of our society, in our laws, our culture, our freedoms. There is a difference between indomitability and arrogant “bring ’em on” rhetoric. There’s a difference between accepting the inherent risk that comes with a free and open society, and hyping the threats.
We should treat terrorists like common criminals and give them all the benefits of true and open justice—not merely because it demonstrates our indomitability, but because it makes us all safer. Once a society starts circumventing its own laws, the risks to its future stability are much greater than terrorism.
Supporting real security even though it’s invisible, and demonstrating indomitability even though fear is more politically expedient, requires real courage. Demagoguery is easy. What we need is leaders willing both to do what’s right and to speak the truth.
Despite fearful rhetoric to the contrary, terrorism is not a transcendent threat. A terrorist attack cannot possibly destroy a country’s way of life; it’s only our reaction to that attack that can do that kind of damage. The more we undermine our own laws, the more we convert our buildings into fortresses, the more we reduce the freedoms and liberties at the foundation of our societies, the more we’re doing the terrorists’ job for them.
We saw some of this in the Londoners’ reaction to the 2005 transport bombings. Among the political and media hype and fearmongering, there was a thread of firm resolve. People didn’t fall victim to fear. They rode the trains and buses the next day and continued their lives. Terrorism’s goal isn’t murder; terrorism attacks the mind, using victims as a prop. By refusing to be terrorized, we deny the terrorists their primary weapon: our own fear.
Today, we can project indomitability by rolling back all the fear-based post-9/11 security measures. Our leaders have lost credibility; getting it back requires a decrease in hyperbole. Ditch the invasive mass surveillance systems and new police state-like powers. Return airport security to pre-9/11 levels. Remove swagger from our foreign policies. Show the world that our legal system is up to the challenge of terrorism. Stop telling people to report all suspicious activity; it does little but make us suspicious of each other, increasing both fear and helplessness.
Terrorism has always been rare, and for all we’ve heard about 9/11 changing the world, it’s still rare. Even 9/11 failed to kill as many people as automobiles do in the US every single month. But there’s a pervasive myth that terrorism is easy. It’s easy to imagine terrorist plots, both large-scale “poison the food supply” and small-scale “10 guys with guns and cars.” Movies and television bolster this myth, so many people are surprised that there have been so few attacks in Western cities since 9/11. Certainly intelligence and investigation successes have made it harder, but mostly it’s because terrorist attacks are actually hard. It’s hard to find willing recruits, to co-ordinate plans, and to execute those plans—and it’s easy to make mistakes.
Counterterrorism is also hard, especially when we’re psychologically prone to muck it up. Since 9/11, we’ve embarked on strategies of defending specific targets against specific tactics, overreacting to every terrorist video, stoking fear, demonizing ethnic groups, and treating the terrorists as if they were legitimate military opponents who could actually destroy a country or a way of life—all of this plays into the hands of terrorists. We’d do much better by leveraging the inherent strengths of our modern democracies and the natural advantages we have over the terrorists: our adaptability and survivability, our international network of laws and law enforcement, and the freedoms and liberties that make our society so enviable. The way we live is open enough to make terrorists rare; we are observant enough to prevent most of the terrorist plots that exist, and indomitable enough to survive the even fewer terrorist plots that actually succeed. We don’t need to pretend otherwise.
Fear and Overreaction

It’s hard work being prey. Watch the birds at a feeder. They’re constantly on alert, and will fly away from food—from easy nutrition—at the slightest movement or sound. Given that I’ve never, ever seen a bird plucked from a feeder by a predator, it seems like a whole lot of wasted effort against a not-very-big threat.
Assessing and reacting to risk is one of the most important things a living creature has to deal with. The amygdala, an ancient part of the brain that first evolved in primitive fishes, has that job. It’s what’s responsible for the fight-or-flight reflex. Adrenaline in the bloodstream, increased heart rate, increased muscle tension, sweaty palms; that’s the amygdala in action. And it works fast, faster than consciousness: show someone a snake and their amygdala will react before their conscious brain registers that they’re looking at a snake.
Fear motivates all sorts of animal behaviors. Schooling, flocking, and herding are all security measures. Not only is it less likely that any member of the group will be eaten, but each member of the group has to spend less time watching out for predators. Animals as diverse as bumblebees and monkeys avoid food in areas where predators are common. Different prey species have developed various alarm calls, some surprisingly specific. And some prey species have even evolved to react to the alarms given off by other species.
Evolutionary biologist Randolph Nesse has studied animal defenses, particularly those that seem to be overreactions. These defenses are mostly all-or-nothing; a creature can’t do them halfway. Birds flying off, sea cucumbers expelling their stomachs, and vomiting are all examples. Using signal detection theory, Nesse showed that all-or-nothing defenses are expected to have many false alarms. “The smoke detector principle shows that the overresponsiveness of many defenses is an illusion. The defenses appear overresponsive because they are ‘inexpensive’ compared to the harms they protect against and because errors of too little defense are often more costly than errors of too much defense.”
So according to the theory, if flight costs 100 calories, both in flying and lost eating time, and there’s a 1 in 100 chance of being eaten if you don’t fly away, it’s smarter for survival to use up 10,000 calories repeatedly flying at the slightest movement even though there’s a 99 percent false alarm rate. Whatever the numbers happen to be for a particular species, it has evolved to get the trade-off right.
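The trade-off can be sketched as a toy expected-cost calculation. The calorie and probability figures below are the essay’s illustrative numbers, not data from Nesse’s paper, and the “death cost” is an arbitrary large stand-in:

```python
# Toy expected-cost model of an all-or-nothing "flee" defense, using the
# essay's illustrative numbers -- not data from Nesse's actual research.

FLIGHT_COST = 100      # calories per flight, including lost eating time
P_PREDATOR = 0.01      # chance a given disturbance is a real predator
DEATH_COST = 10 ** 6   # arbitrary stand-in for the cost of being eaten
EVENTS = 100           # disturbances at the feeder

# Strategy 1: always flee -- pay the flight cost every time, never get eaten.
cost_always_flee = EVENTS * FLIGHT_COST

# Strategy 2: never flee -- save the calories, but carry the expected death cost.
cost_never_flee = EVENTS * P_PREDATOR * DEATH_COST

print(cost_always_flee)  # 10000: a 99% false-alarm rate, but the bird is alive
print(cost_never_flee)   # 1000000.0: cheap per event, ruinous in expectation
```

Put another way, fleeing stays rational whenever FLIGHT_COST < P_PREDATOR × DEATH_COST, which is why a defense that looks wildly overresponsive can still be the evolutionarily correct setting.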
This makes sense, until the conditions that the species evolved under change quicker than evolution can react to. Even though there are far fewer predators in the city, birds at my feeder react as if they were in the primal forest. Even birds safe in a zoo’s aviary don’t realize that the situation has changed.
Humans are both no different and very different. We, too, feel fear and react with our amygdala, but we also have a conscious brain that can override those reactions. And we too live in a world very different from the one we evolved in. Our reflexive defenses might be optimized for the risks endemic to living in small family groups in the East African highlands in 100,000 BC, not 2009 New York City. But we can go beyond fear, and actually think sensibly about security.
Far too often, we don’t. We tend to be poor judges of risk. We overreact to rare risks, we ignore long-term risks, we magnify risks that are also morally offensive. We get risks wrong—threats, probabilities, and costs—all the time. When we’re afraid, really afraid, we’ll do almost anything to make that fear go away. Both politicians and marketers have learned to push that fear button to get us to do what they want.
One night last month, I was awakened from my hotel-room sleep by a loud, piercing alarm. There was no way I could ignore it, but I weighed the risks and did what any reasonable person would do under the circumstances: I stayed in bed and waited for the alarm to be turned off. No point getting dressed, walking down ten flights of stairs, and going outside into the cold for what invariably would be a false alarm—serious hotel fires are very rare. Unlike the bird in an aviary, I knew better.
You can disagree with my risk calculus, and I’m sure many hotel guests walked downstairs and outside to the designated assembly point. But it’s important to recognize that the ability to have this sort of discussion is uniquely human. And we need to have the discussion repeatedly, whether the topic is the installation of a home burglar alarm, the latest TSA security measures, or the potential military invasion of another country. These things aren’t part of our evolutionary history; we have no natural sense of how to respond to them. Our fears are often calibrated wrong, and reason is the only way we can override them.
This essay first appeared on DarkReading.com.
Hotel fires are rare:
Fugitive caught after uploading his status on Facebook:
Six years of Microsoft Patch Tuesdays:
A computer card counter detects human card counters; all it takes is a computer that can track every card:
A woman posts a horrible story of how she was mistreated by the TSA, and the TSA responds by releasing the video showing she was lying.
Australian man receives reduced sentence due to encryption:
Steve Ballmer blames the failure of Windows Vista on security:
James Bamford on the NSA
Interesting story of a 2006 Wal-Mart hack from, probably, Minsk.
Ross Anderson has put together a great resource page on security and psychology:
The U.S. Deputy Director of National Intelligence for Collection gives a press conference on the new Utah data collection facility.
“Capability of the People’s Republic of China to Conduct Cyber Warfare and Computer Network Exploitation,” prepared for the US-China Economic and Security Review Commission, Northrop Grumman Corporation, October 9, 2009.
Squirrel terrorists attacking our critical infrastructure.
We have a cognitive bias to exaggerate risks caused by other humans, and downplay risks caused by animals (and, even more, by natural phenomena).
To aid their Wall Street investigations, the FBI used DCSNet, its massive surveillance system.
Detecting terrorists by smelling fear:
In the “Open Access Journal of Forensic Psychology”, there’s a paper about the problems with unscientific security: “A Call for Evidence-Based Security Tools”:
Mossad hacked a Syrian official’s computer; it was unattended in a hotel room at the time.
Remember the evil maid attack: if an attacker gets hold of your computer temporarily, he can bypass your encryption software.
Recently I wrote about the difficulty of making role-based access control work, and how research at Dartmouth showed that it was better to let people take the access control they need to do their jobs, and audit the results. This interesting paper, “Laissez-Faire File Sharing,” tries to formalize that sort of access control.
I have refrained from commenting on the case against Najibullah Zazi, simply because it’s so often the case that the details reported in the press have very little to do with reality. My suspicion was that he was, as in so many other cases, an idiot who couldn’t do any real harm and was turned into a bogeyman for political purposes. However, John Mueller—who I’ve written about before—has done the research.
Interesting research: “Countering Kernel Rootkits with Lightweight Hook Protection,” by Zhi Wang, Xuxian Jiang, Weidong Cui, and Peng Ning.
Airport thieves prefer stealing black luggage; it’s obvious why if you think about it.
We’ve seen lots of rumors, both in the U.S. and elsewhere, of people hacking the power grid. President Obama mentioned it in his May cybersecurity speech: “In other countries cyberattacks have plunged entire cities into darkness.” Seems the source of these rumors has been Brazil.
FBI/CIA/NSA information sharing before 9/11:
Blowfish in fiction:
Zero-Tolerance Policies

Recent stories have documented the ridiculous effects of zero-tolerance weapons policies in a Delaware school district: a first-grader expelled for taking a camping utensil to school, a 13-year-old expelled after another student dropped a pocketknife in his lap, and a seventh-grader expelled for cutting paper with a utility knife for a class project. Where’s the common sense? the editorials cry.
These so-called zero-tolerance policies are actually zero-discretion policies. They’re policies that must be followed, no situational discretion allowed. We encounter them whenever we go through airport security: no liquids, gels or aerosols. Some workplaces have them for sexual harassment incidents; in some sports a banned substance found in a urine sample means suspension, even if it’s for a real medical condition. Judges have zero discretion when faced with mandatory sentencing laws: three strikes for drug offences and you go to jail, mandatory sentencing for statutory rape (underage sex), etc. A national restaurant chain won’t serve hamburgers rare, even if you offer to sign a waiver. Whenever you hear “that’s the rule, and I can’t do anything about it”—and they’re not lying to get rid of you—you’re butting up against a zero-discretion policy.
These policies enrage us because they are blind to circumstance. Editorial after editorial denounced the suspensions of elementary school children for offenses that anyone with any common sense would agree were accidental and harmless. The Internet is filled with essays demonstrating how the TSA’s rules are nonsensical and sometimes don’t even improve security. I’ve written some of them. What we want is for those involved in the situations to have discretion.
However, problems with discretion were the reason behind these mandatory policies in the first place. Discretion is often applied inconsistently. One school principal might deal with knives in the classroom one way, and another principal another way. Your drug sentence could depend considerably on how sympathetic your judge is, or on whether she’s having a bad day.
Even worse, discretion can lead to discrimination. Schools had weapons bans before zero-tolerance policies, but teachers and administrators enforced the rules disproportionately against African-American students. Criminal sentences varied by race, too. The benefit of zero-discretion rules and laws is that they ensure that everyone is treated equally.
Zero-discretion rules also protect against lawsuits. If the rules are applied consistently, no parent, air traveler or defendant can claim he was unfairly discriminated against.
So that’s the choice. Either we want the rules enforced fairly across the board, which means limiting the discretion of the enforcers at the scene at the time, or we want a more nuanced response to whatever the situation is, which means we give those involved in the situation more discretion.
Of course, there’s more to it than that. The problem with the zero-tolerance weapons rules isn’t that they’re rigid, it’s that they’re poorly written.
What constitutes a weapon? Is it any knife, no matter how small? Should the penalties be the same for a first grader and a high school student? Does intent matter? When an aspirin carried for menstrual cramps becomes “drug possession,” you know there’s a badly written rule in effect.
It’s the same with airport security and criminal sentencing. Broad and simple rules may be simpler to follow—and require less thinking on the part of those enforcing them—but they’re almost always far less nuanced than our complex society requires. Unfortunately, the more complex the rules are, the more they’re open to interpretation and the more discretion the interpreters have.
The solution is to combine the two, rules and discretion, with procedures to make sure they’re not abused. Provide rules, but don’t make them so rigid that there’s no room for interpretation. Give the people in the situation—the teachers, the airport security agents, the policemen, the judges—discretion to apply the rules to the situation. But—and this is the important part—allow people to appeal the results if they feel they were treated unfairly. And regularly audit the results to ensure there is no discrimination or favoritism. It’s the combination of the four that works: rules plus discretion plus appeal plus audit.
All systems need some form of redress, whether it be open and public like a courtroom or closed and secret like the TSA. Giving discretion to those at the scene just makes for a more efficient appeals process, since the first level of appeal can be handled on the spot.
Zachary, the Delaware first grader suspended for bringing a combination fork, spoon and knife camping utensil to eat his lunch with, had his punishment unanimously overturned by the school board. This was the right decision; but what about all the other students whose parents weren’t forceful or media-savvy enough to turn their child’s plight into a national story? Common sense in applying rules is important, but so is equal access to that common sense.
This essay originally appeared on the Minnesota Public Radio website.
A former soldier who handed a discarded shotgun in to police faces at least five years imprisonment for “doing his duty”.
Security in a Reputation Economy

In the past, our relationship with our computers was technical. We cared what CPU they had and what software they ran. We understood our networks and how they worked. We were experts, or we depended on someone else for expertise. And security was part of that expertise.
This is changing. We access our email via the web, from any computer or from our phones. We use Facebook, Google Docs, even our corporate networks, regardless of hardware or network. We, especially the younger of us, no longer care about the technical details. Computing is infrastructure; it’s a commodity. It’s less about products and more about services; we simply expect it to work, like telephone service or electricity or a transportation network.
Infrastructures can be spread on a broad continuum, ranging from generic to highly specialized. Power and water are generic; who supplies them doesn’t really matter. Mobile phone services, credit cards, ISPs, and airlines are mostly generic. More specialized infrastructure services are restaurant meals, haircuts, and social networking sites. Highly specialized services include tax preparation for complex businesses, management consulting, legal services, and medical services.
Sales for these services are driven by two things: price and trust. The more generic the service is, the more price dominates. The more specialized it is, the more trust dominates. IT is something of a special case because so much of it is free. So, for both specialized IT services where price is less important and for generic IT services—think Facebook—where there is no price, trust will grow in importance. IT is becoming a reputation-based economy, and this has interesting ramifications for security.
Some years ago, the major credit card companies became concerned about the plethora of credit-card-number thefts from sellers’ databases. They worried that these might undermine the public’s trust in credit cards as a secure payment system for the internet. They knew the sellers would only protect these databases up to the level of the threat to the seller, and not to the greater level of threat to the industry as a whole. So they banded together and produced a security standard called PCI. It’s wholly industry-enforced by an industry that realized its reputation was more valuable than the sellers’ databases.
A reputation-based economy means that infrastructure providers care more about security than their customers do. I realized this 10 years ago with my own company. We provided network-monitoring services to large corporations, and our internal network security was much more extensive than our customers’. Our customers secured their networks—that’s why they hired us, after all—but only up to the value of their networks. If we mishandled any of our customers’ data, we would have lost the trust of all of our customers.
I heard the same story at an ENISA conference in London last June, when an IT consultant explained that he had begun encrypting his laptop years before his customers did. While his customers might decide that the risk of losing their data wasn’t worth the hassle of dealing with encryption, he knew that if he lost data from one customer, he risked losing all of his customers.
As IT becomes more like infrastructure, more like a commodity, expect service providers to improve security to levels greater than their customers would have done themselves.
In IT, customers learn about company reputation from many sources: magazine articles, analyst reviews, recommendations from colleagues, awards, certifications, and so on. Of course, this only works if customers have accurate information. In a reputation economy, companies have a motivation to hide their security problems.
You’ve all experienced a reputation economy: restaurants. Some restaurants have a good reputation, and are filled with regulars. When restaurants get a bad reputation, people stop coming and they close. Tourist restaurants—whose main attraction is their location, and whose customers frequently don’t know anything about their reputation—can thrive even if they aren’t any good. And sometimes a restaurant can keep its reputation—an award in a magazine, a special occasion restaurant that “everyone knows” is the place to go—long after its food and service have declined.
The reputation economy is far from perfect.
This essay originally appeared in “The Guardian.”
Schneier News

I’m speaking at the Internet Governance Forum in Sharm el-Sheikh, Egypt, on November 16 and 17.
I’m speaking at the 2009 SecAU Security Congress in Perth on December 2 and 3.
I’m speaking at an Open Rights Group event in London on December 4.
I’m speaking at the First IEEE Workshop on Information Forensics and Security in London on December 8.
I’m speaking at the UCL Centre for Security and Crime Science in London on December 7.
I’m speaking at the Young Professionals in Foreign Policy in London on December 7.
I’m speaking at the Iberic Web Application Security Conference in Madrid, December 10-11, 2009.
Article on me from a Luxembourg magazine.
Interview with me on CNet.com:
Video interview with me, conducted at the Information Security Decisions conference in Chicago in October.
A month ago, ThatsMyFace.com approached me about making a Bruce Schneier action figure. It’s $100. I’d like to be able to say something like “half the proceeds are going to EPIC and EFF,” but they’re not. That’s the price for custom orders. I don’t even get a royalty. The company is working on lowering the price, and they’ve said that they’ll put a photograph of an actual example on the webpage. I’ve told them that at $100 no one will buy it, but at $40 it’s a funny gift for your corporate IT person. So e-mail the company if you’re interested, and if they get enough interest they’ll do a bulk order.
The Commercial Speech Arms Race

A few years ago, a company began to sell a liquid with identification codes suspended in it. The idea was that you would paint it on your stuff as proof of ownership. I commented that I would paint it on someone else’s stuff, then call the police.
I was reminded of this recently when a group of Israeli scientists demonstrated that it’s possible to fabricate DNA evidence. So now, instead of leaving your own DNA at a crime scene, you can leave fabricated DNA. And it isn’t even necessary to fabricate. In Charlie Stross’s novel “Halting State,” the bad guys foul a crime scene by blowing around the contents of a vacuum cleaner bag, containing the DNA of dozens, if not hundreds, of people.
This kind of thing has been going on for ever. It’s an arms race, and when technology changes, the balance between attacker and defender changes. But when automated systems do the detecting, the results are different. Face recognition software can be fooled by cosmetic surgery, or sometimes even just a photograph. And when fooling them becomes harder, the bad guys fool them on a different level. Computer-based detection gives the defender economies of scale, but the attacker can use those same economies of scale to defeat the detection system.
Google, for example, has anti-fraud systems that detect and shut down advertisers who try to inflate their revenue by repeatedly clicking on their own AdSense ads. So people built bots to repeatedly click on the AdSense ads of their competitors, trying to convince Google to kick them out of the system.
Similarly, when Google started penalizing a site’s search engine rankings for having “bad neighbors”—backlinks from link farms, adult or gambling sites, or blog spam—people engaged in sabotage: they built link farms and left blog comment spam linking to their competitors’ sites.
The same sort of thing is happening on Yahoo Answers. Initially, companies would leave answers pushing their products, but Yahoo started policing this. So people have written bots to report abuse on all their competitors. There are Facebook bots doing the same sort of thing.
Last month, Google introduced Sidewiki, a browser feature that lets you read and post comments on virtually any webpage. People and industries are already worried about the effects unrestrained commentary might have on their businesses, and how they might control the comments. I’m sure Google has sophisticated systems ready to detect commercial interests that try to take advantage of the system, but are they ready to deal with commercial interests that try to frame their competitors? And do we want to give one company the power to decide which comments should rise to the top and which get deleted?
Whenever you build a security system that relies on detection and identification, you invite the bad guys to subvert the system so it detects and identifies someone else. Sometimes this is hard—leaving someone else’s fingerprints on a crime scene is hard, as is using a mask of someone else’s face to fool a guard watching a security camera—and sometimes it’s easy. But when automated systems are involved, it’s often very easy. It’s not just hardened criminals that try to frame each other, it’s mainstream commercial interests.
With systems that police internet comments and links, there’s money involved in commercial messages—so you can be sure some will take advantage of it. This is the arms race. Build a detection system, and the bad guys try to frame someone else. Build a detection system to detect framing, and the bad guys try to frame someone else framing someone else. Build a detection system to detect framing of framing, and well, there’s no end, really. Commercial speech is on the internet to stay; we can only hope that it doesn’t pollute the social systems we use so badly that they’re no longer useful.
This essay originally appeared in “The Guardian.”
“Smart Water” liquid identification:
Fabricating DNA evidence:
A divining rod to find explosives in Iraq:
Earlier this month, Joanna Rutkowska implemented the “evil maid” attack against TrueCrypt. The same kind of attack should work against any whole-disk encryption, including PGP Disk and BitLocker. Basically, the attack works like this:
Step 1: Attacker gains access to your shut-down computer and boots it from a separate volume. The attacker writes a hacked bootloader onto your system, then shuts it down.
Step 2: You boot your computer using the attacker’s hacked bootloader, entering your encryption key. Once the disk is unlocked, the hacked bootloader does its mischief. It might install malware to capture the key and send it over the Internet somewhere, or store it in some location on the disk to be retrieved later, or whatever.
You can see why it’s called the “evil maid” attack; a likely scenario is that you leave your encrypted computer in your hotel room when you go out to dinner, and the maid sneaks in and installs the hacked bootloader. The same maid could even sneak back the next night and erase any traces of her actions.
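The steps above can be sketched as a toy model. The core of the vulnerability is that the bootloader is the one piece of code that must run unencrypted, before the passphrase is ever typed—so whoever controls it sees the passphrase. This is an illustrative simulation only; the names and structure are mine, not TrueCrypt’s.

```python
# Toy model of the "evil maid" attack. The bootloader runs before the
# disk is unlocked, so an attacker who can rewrite it sees the secret.
captured = []  # what the attacker's implant stores for later retrieval

def legit_bootloader(passphrase, disk_key):
    # A real bootloader derives a key from the passphrase and unlocks
    # the disk; here we just compare directly.
    return passphrase == disk_key

def evil_bootloader(passphrase, disk_key):
    captured.append(passphrase)                    # record the secret...
    return legit_bootloader(passphrase, disk_key)  # ...then boot normally

# Step 1: the "maid" swaps the bootloader while the machine is off.
machine = {"bootloader": evil_bootloader, "disk_key": "hunter2"}

# Step 2: the owner boots and types the passphrase as usual.
unlocked = machine["bootloader"]("hunter2", machine["disk_key"])

print(unlocked)   # True: the machine boots normally, nothing looks wrong
print(captured)   # ['hunter2']: the attacker now has the passphrase
```

The victim notices nothing, because the corrupted bootloader hands control to the real one after capturing the secret.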
This attack exploits the same basic vulnerability as the “Cold Boot” attack from last year, and the “Stoned Boot” attack from earlier this year, and there’s no real defense to this sort of thing. As soon as you give up physical control of your computer, all bets are off. From CRN: “Similar hardware-based attacks were among the main reasons why Symantec’s CTO Mark Bregman was recently advised by ‘three-letter agencies in the US Government’ to use separate laptop and mobile device when traveling to China, citing potential hardware-based compromise.”
PGP sums it up in their blog: “No security product on the market today can protect you if the underlying computer has been compromised by malware with root level administrative privileges. That said, there exists well-understood common sense defenses against ‘Cold Boot,’ ‘Stoned Boot,’ ‘Evil Maid,’ and many other attacks yet to be named and publicized.”
The defenses are basically two-factor authentication: a token you don’t leave in your hotel room for the maid to find and use. The maid could still corrupt the machine, but it’s more work than just storing the password for later use. Putting your data on a thumb drive and taking it with you doesn’t work; when you return you’re plugging your thumb drive into a corrupted machine.
The real defense here is trusted boot, something Trusted Computing is supposed to enable. And the only way to get that is from Microsoft’s BitLocker hard disk encryption, if your computer has a TPM module version 1.2 or later.
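The idea behind trusted boot can be sketched as a hash chain: each boot stage is measured into a register before it runs, and the disk key is sealed to the expected final value, so a swapped bootloader changes the measurement and the key stays locked. This is a conceptual sketch of the PCR-extend idea, not the actual TPM 1.2 interface:

```python
import hashlib

def extend(pcr, component):
    # TPM-style extend: PCR_new = H(PCR_old || H(component)).
    # You can only fold new measurements in, never reset or overwrite.
    return hashlib.sha1(pcr + hashlib.sha1(component).digest()).digest()

def measure_boot(stages):
    pcr = b"\x00" * 20  # the PCR starts at zero on hardware reset
    for stage in stages:
        pcr = extend(pcr, stage)
    return pcr

clean = measure_boot([b"firmware v1", b"bootloader v5"])
evil = measure_boot([b"firmware v1", b"evil bootloader"])

# BitLocker seals the disk key to the clean measurement; any change
# anywhere in the chain yields a different PCR value, so the TPM
# refuses to release the key to the tampered system.
print(clean != evil)  # True
```

Because the extend operation is one-way, the evil bootloader cannot forge the clean measurement after the fact; the best it can do is produce a different one, which the TPM rejects.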
In the meantime, people who encrypt their hard drives, or partitions on their hard drives, have to realize that the encryption gives them less protection than they probably believe. It protects against someone confiscating or stealing their computer and then trying to get at the data. It does not protect against an attacker who has access to your computer over a period of time during which you use it, too.
Evil Maid attacks:
Cold Boot and Stoned Boot attacks:
This essay previously appeared in “Information Security Magazine,” as the second half of a point-counterpoint with Marcus Ranum. You can read his half here as well:
Security is never black and white. If someone asks, “For best security, should I do A or B?” the answer almost invariably is both. But security is always a trade-off. Often it’s impossible to do both A and B—there’s no time to do both, it’s too expensive to do both, or whatever—and you have to choose. In that case, you look at A and B and you make your best choice. But it’s almost always more secure to do both.
Yes, antivirus programs have been getting less effective as new viruses are more frequent and existing viruses mutate faster. Yes, antivirus companies are forever playing catch-up, trying to create signatures for new viruses. Yes, signature-based antivirus software won’t protect you when a virus is new, before the signature is added to the detection program. Antivirus is by no means a panacea.
On the other hand, an antivirus program with up-to-date signatures will protect you from a lot of threats. It’ll protect you against viruses, against spyware, against Trojans—against all sorts of malware. It’ll run in the background, automatically, and you won’t notice any performance degradation at all. And—here’s the best part—it can be free. AVG won’t cost you a penny. To me, this is an easy trade-off, certainly for the average computer user who clicks on attachments he probably shouldn’t click on, downloads things he probably shouldn’t download, and doesn’t understand the finer workings of Windows Personal Firewall.
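At its core, the signature approach described above is just pattern matching against a database of known byte sequences—which is exactly why it catches the common, known threats well and misses brand-new ones. A toy scanner, with invented signatures (real databases hold millions of entries plus heuristics):

```python
# Toy signature-based scanner: flag data containing any known byte
# pattern. These signatures are invented for illustration only.
SIGNATURES = {
    b"\xde\xad\xbe\xef_dropper": "Example.Dropper.A",
    b"MZ\x90\x00_evil_stub": "Example.Stub.B",
}

def scan(data: bytes):
    """Return the names of all known signatures found in the data."""
    return [name for sig, name in SIGNATURES.items() if sig in data]

print(scan(b"perfectly ordinary file"))             # []
print(scan(b"junk \xde\xad\xbe\xef_dropper junk"))  # ['Example.Dropper.A']
```

A new virus simply has no entry in the dictionary yet, so it scans clean—the catch-up game the antivirus companies are forever playing.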
Certainly security would be improved if people used whitelisting programs such as Bit9 Parity and Savant Protection—and I personally recommend Malwarebytes’ Anti-Malware—but a lot of users are going to have trouble with this. The average user will probably just swat away the “you’re trying to run a program not on your whitelist” warning message or—even worse—wonder why his computer is broken when he tries to run a new piece of software. The average corporate IT department doesn’t have a good idea of what software is running on all the computers within the corporation, and doesn’t want the administrative overhead of managing all the change requests. And whitelists aren’t a panacea, either: they don’t defend against malware that attaches itself to data files (think Word macro viruses), for example.
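Whitelisting inverts the blacklist logic: instead of blocking known-bad patterns, only executables whose hashes appear on an approved list may run. A sketch of the mechanism (hypothetical file contents, not any vendor’s actual product), which also shows the usability problem—any new or updated binary is blocked until someone approves it:

```python
import hashlib

def fingerprint(binary: bytes) -> str:
    # Identify a program by the hash of its contents, not its name,
    # so a modified or renamed binary gets a different fingerprint.
    return hashlib.sha256(binary).hexdigest()

# The administrator-approved set of programs, keyed by content hash.
approved_app = b"...bytes of the approved word processor..."
WHITELIST = {fingerprint(approved_app)}

def may_run(binary: bytes) -> bool:
    return fingerprint(binary) in WHITELIST

print(may_run(approved_app))                      # True: approved binary runs
print(may_run(b"...freshly downloaded game..."))  # False: blocked until approved
```

That second `False` is the warning message the average user swats away—or the “broken computer” he complains about—and every legitimate software update triggers it too, which is the administrative overhead corporate IT departments don’t want.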
One of the newest trends in IT is consumerization, and if you don’t already know about it, you soon will. It’s the idea that new technologies, the cool stuff people want, will become available for the consumer market before they become available for the business market. What it means to business is that people—employees, customers, partners—will access business networks from wherever they happen to be, with whatever hardware and software they have. Maybe it’ll be the computer you gave them when you hired them. Maybe it’ll be their home computer, the one their kids use. Maybe it’ll be their cell phone or PDA, or a computer in a hotel’s business center. Your business will have no way to know what they’re using, and—more importantly—you’ll have no control.
In this kind of environment, computers are going to connect to each other without a whole lot of trust between them. Untrusted computers are going to connect to untrusted networks. Trusted computers are going to connect to untrusted networks. The whole idea of “safe computing” is going to take on a whole new meaning—every man for himself. A corporate network is going to need a simple, dumb, signature-based antivirus product at the gateway of its network. And a user is going to need a similar program to protect his computer.
Bottom line: antivirus software is neither necessary nor sufficient for security, but it’s still a good idea. It’s not a panacea that magically makes you safe, nor is it obsolete in the face of current threats. As countermeasures go, it’s cheap, it’s easy, and it’s effective. I haven’t dumped my antivirus program, and I have no intention of doing so anytime soon.
Problems with anti-virus software:
Since 1998, CRYPTO-GRAM has been a free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise. You can subscribe, unsubscribe, or change your address on the Web at <http://www.schneier.com/crypto-gram.html>. Back issues are also available at that URL.
Please feel free to forward CRYPTO-GRAM, in whole or in part, to colleagues and friends who will find it valuable. Permission is also granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.
CRYPTO-GRAM is written by Bruce Schneier. Schneier is the author of the best sellers “Schneier on Security,” “Beyond Fear,” “Secrets and Lies,” and “Applied Cryptography,” and an inventor of the Blowfish, Twofish, Threefish, Helix, Phelix, and Skein algorithms. He is the Chief Security Technology Officer of BT BCSG, and is on the Board of Directors of the Electronic Privacy Information Center (EPIC). He is a frequent writer and lecturer on security topics. See <http://www.schneier.com>.
Crypto-Gram is a personal newsletter. Opinions expressed are not necessarily those of BT.
Copyright (c) 2009 by Bruce Schneier.