February 15, 2011
by Bruce Schneier
Chief Security Technology Officer, BT
A free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise.
For back issues, or to subscribe, visit <http://www.schneier.com/crypto-gram.html>.
You can read this issue on the web at <http://www.schneier.com/crypto-gram-1102.html>. These same essays and news items appear in the “Schneier on Security” blog at <http://www.schneier.com/>, along with a lively comment section. An RSS feed is available.
In this issue:
- Societal Security
- Domodedovo Airport Bombing
- Scareware: How Crime Pays
- Schneier News
- UK Immigration Officer Puts Wife on the No-Fly List
- Whitelisting vs. Blacklisting
Humans have a natural propensity to trust non-kin, even strangers. We do it so often, so naturally, that we don’t even realize how remarkable it is. But except for a few simplistic counterexamples, it’s unique among life on this planet. Because we are intelligently calculating and value reciprocity (that is, fairness), we know that humans will be honest and nice: not for any immediate personal gain, but because that’s how they are. We also know that doesn’t work perfectly; most people will be dishonest some of the time, and some people will be dishonest most of the time. How does society — the honest majority — prevent the dishonest minority from taking over, or ruining society for everyone? How is the dishonest minority kept in check? The answer is security — in particular, something I’m calling societal security.
I want to divide security into two types. The first is individual security. It’s basic. It’s direct. It’s what normally comes to mind when we think of security. It’s cops vs. robbers, terrorists vs. the TSA, Internet worms vs. firewalls. And this sort of security is as old as life itself or — more precisely — as old as predation. And humans have brought an incredible level of sophistication to individual security.
Societal security is different. At the tactical level, it also involves attacks, countermeasures, and entire security systems. But instead of A vs. B, or even Group A vs. Group B, it’s Group A vs. members of Group A. It’s security for individuals within a group from members of that group. It’s how Group A protects itself from the dishonest minority within Group A. And it’s where security really gets interesting.
There are many types — I might try to estimate the number someday — of societal security systems that enforce our trust of non-kin. They’re things like laws prohibiting murder, taxes, traffic laws, pollution control laws, religious intolerance, Mafia codes of silence, and moral codes. They enable us to build a society that the dishonest minority can’t exploit and destroy. Originally, these security systems were informal. But as society got more complex, the systems became more formalized, and eventually were embedded into technologies.
James Madison famously wrote: “If men were angels, no government would be necessary.” Government is just the beginning of what wouldn’t be necessary. Currency, that paper stuff that’s deliberately made hard to counterfeit, wouldn’t be necessary, as people could just keep track of how much money they had. Angels never cheat, so nothing more would be required. Door locks, and any barrier that isn’t designed to protect against accidents, wouldn’t be necessary, since angels never go where they’re not supposed to go. Police forces wouldn’t be necessary. Armies: I suppose that’s debatable. Would angels — not the fallen ones — ever go to war against one another? I’d like to think they would be able to resolve their differences peacefully. If people were angels, every security measure that isn’t designed to be effective against accident, animals, forgetfulness, or legitimate differences between scrupulously honest angels could be dispensed with.
Security isn’t just a tax on the honest; it’s a very expensive tax on the honest. It’s the most expensive tax we pay, regardless of the country we live in. If people were angels, just think of the savings!
It wasn’t always like this. Security — especially societal security — used to be cheap. It used to be an incidental cost of society.
In a primitive society, informal systems are generally good enough. When you’re living in a small community, and objects are both scarce and hard to make, it’s pretty easy to deal with the problem of theft. If Alice loses a bowl, and at the same time, Bob shows up with an identical bowl, everyone knows Bob stole it from Alice, and the community can then punish Bob as it sees fit. But as communities get larger, as social ties weaken and anonymity increases, this informal system of theft prevention — detection and punishment leading to deterrence — fails. As communities get more technological and as the things people might want to steal get more interchangeable and harder to identify, it also fails. In short, as our ancestors made the move from small family groups to larger groups of unrelated families, and then to a modern form of society, the informal societal security systems started failing and more formal systems had to be invented to take their place. We needed to put license plates on cars and audit people’s tax returns.
We had no choice. Anything larger than a very primitive society couldn’t exist without societal security.
I’m writing a book about societal security. I will discuss human psychology: how we make security trade-offs, why we routinely trust non-kin (an evolutionary puzzle, to be sure), how the majority of us are honest, and that a minority of us are dishonest. That dishonest minority are the free riders of societal systems, and security is how we protect society from them. I will model the fundamental trade-off of societal security — individual self-interest vs. societal group interest — as a group prisoner’s dilemma problem, and use that metaphor to examine the basic mechanics of societal security. A lot falls out of this: free riders, the Tragedy of the Commons, the subjectivity of both morals and risk trade-offs.
Using this model, I will explore the security systems that protect — and fail to protect — market economics, corporations and other organizations, and a variety of national systems. I think there’s a lot we can learn about security by applying the prisoner’s dilemma model, and I’ve only recently started. Finally, I want to discuss modern changes to our millennia-old systems of societal security. The Information Age has changed a number of paradigms, and it’s not clear that our old security systems are working properly now or will work in the future. I’ve got a lot of work to do yet, and the final book might look nothing like this short outline. That sort of thing happens.
Tentative title: The Dishonest Minority: Security and its Role in Modern Society. I’ve written several books on the how of security. This book is about the why of security.
I expect to finish my first draft before summer. Throughout 2011, expect to see bits from the book here. They might not make sense as a coherent whole at first — especially because I don’t write books in strict order — but by the time the book is published, it’ll all be part of a coherent and (hopefully) compelling narrative.
And if I write fewer extended blog posts and essays in the coming year, you’ll know why.
I haven’t written anything about the suicide bombing at Moscow’s Domodedovo Airport because I didn’t think there was anything to say. The bomber was outside the security checkpoint, in the area where family and friends wait for arriving passengers. From a security perspective, the bombing had nothing to do with airport security. He could have just as easily been in a movie theater, stadium, shopping mall, market, or anywhere else lots of people are crowded together with limited exits. The large death and injury toll indicates the bomber chose his location well.
I’ve often written that security measures that are only effective if the implementers guess the plot correctly are largely wastes of money — at best they would have forced this bomber to choose another target — and that our best security investments are intelligence, investigation, and emergency response. This latest terrorist attack underscores that even more. “Critics say” that the TSA couldn’t have detected this sort of attack. Of course; the TSA can’t be everywhere. And that’s precisely the point.
Many reporters asked me about the likely U.S. reaction. I don’t know; it could range from “Moscow is a long way off and that doesn’t concern us” to “Oh my god we’re all going to die!” The worry, of course, is that we will need to “do something,” even though there is no “something” that should be done.
I was interviewed by the Esquire politics blog about this. I’m not terribly happy with the interview; I was rushed and sloppy on the phone.
Me on terrorism security:
I wrote a lot last year about the assassination of Mahmoud al-Mabhouh in Dubai. There’s a new article by an Israeli investigative journalist that tells the story we already knew, and adds a bunch of interesting details. Well worth reading.
My older writings:
I’ve also written a lot about Stuxnet. This long New York Times article includes some interesting revelations. The article claims that Stuxnet was a joint Israeli-American project, and that its effectiveness was tested on live equipment: “Behind Dimona’s barbed wire, the experts say, Israel has spun nuclear centrifuges virtually identical to Iran’s at Natanz, where Iranian scientists are struggling to enrich uranium.”
My older writings:
And an alternate theory: the Chinese did it.
More opinions on Stuxnet:
This would make a great movie: “Rep. Dan Burton, R-Ind., renewed his call for the installation of an impenetrable, see-through security shield around the viewing gallery overlooking the House floor. Burton points out that, while guns and some bombs would be picked up by metal detectors, a saboteur could get into the Capitol concealing plastic explosives.”
This is a story about an odd art forger who is not in it for the money. I wonder if his art will be famous someday.
Last month, the U.S. Supreme Court heard arguments about whether or not corporations have the same rights to “personal privacy” that individuals do. This is a good analysis of the case.
I signed on to a “friend of the court” brief put together by EPIC, arguing that they do not.
More background here.
An editorial from The Washington Post.
And here’s a much more entertaining take on the issue.
A cost-benefit analysis of full-body scanners, by Mark Stewart and John Mueller:
Response from Mark Stewart to some of the comments on my blog:
Paper on the legality of the CA trust model:
Matt Blaze on CAs:
A new report from the OECD says the threat of cyberwar has been grossly exaggerated. There are lots of news articles.
Also worth reading is this article on cyberwar hype and how it isn’t serving our national interests, with some good policy guidelines.
Me on cyberwar:
This safecracking robot tries every possible combination, one after another. Through some clever reductions of the combination space, opening the safe took “just a few hours.”
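The arithmetic behind that kind of reduction is easy to sketch. In this illustration (the dial size and tolerance are assumptions, not details from the robot project), a three-number combination on a 100-position dial gives a million possibilities, but if the mechanism accepts each number within ±1 of the true value, stepping the dial by 3 still guarantees a hit and shrinks the search space by roughly a factor of 25:

```python
import itertools

def candidate_combinations(dial_size=100, wheels=3, tolerance=1):
    """Yield combinations to try against a lock that accepts each
    wheel's number within +/- tolerance of the true value. Stepping
    the dial by the acceptance window (2*tolerance + 1) still
    guarantees some candidate lands close enough on every wheel."""
    step = 2 * tolerance + 1
    positions = range(0, dial_size, step)
    return itertools.product(positions, repeat=wheels)

full_space = 100 ** 3
reduced = sum(1 for _ in candidate_combinations())
print(full_space, reduced)  # 1000000 39304 -- about a 25x reduction
```

At one try every few seconds, that is the difference between months and hours.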
Along the same lines, here’s a Lego robot that cracks combination locks.
I wrote about another, non-Lego, brute-force combination lock cracker a few years ago.
The original link is broken, but the project is here.
In this video, champion safecracker Jeff Sitar opens a similar safe by feel and sound in just 5 minutes and 19 seconds.
At the Black Hat conference last week, Jamie Schwettmann and Eric Michaud presented some great research on hacking tamper-evident seals.
It’s amazing how many security cameras are on the Internet, accessible by anyone. And it’s not just for viewing; a lot of these cameras can be reprogrammed by anyone.
This site lists Google search terms to find cameras, as does the comments section in this Slashdot story.
According to this study, REAL ID has not only been cheaper to implement than the states estimated, but also helpful in reducing fraud. This might be the first government IT project ever that came in under initial cost estimates. Perhaps the reason is that the states did not want to implement REAL ID in 2005, so they overstated the costs. As to fraud reduction — I’m not so sure. As the difficulty of getting a fraudulent ID increases, so does its value. I think we’ll have to wait a while longer and see how criminals adapt.
CATO’s Jim Harper argues that this report does not show that implementing the national ID program envisioned in the national ID law is a cost-effective success. It only assesses compliance with certain DHS-invented “benchmarks” related to REAL ID, and does so in a way that skews the results.
This is a bit surreal: security theater in the theater.
Security theater, illustrated.
An undercover TSA agent successfully bribed a JetBlue ticket agent to check a suitcase under a random passenger’s name and put it on an airplane. As with a lot of these tests, I’m not that worried because it’s not a reliable enough tactic to build a plot around. But untrustworthy airline personnel — or easily bribable airline personnel — could be used in a smarter and less risky plot.
It’s only a proof of concept, but it’s scary nonetheless. It’s a Trojan for Android phones that looks for credit card numbers, either typed or spoken, and relays them back to its controller. Section 7.2 of the research paper describes some defenses, but I’m not really impressed by any of them.
The Seattle man who refused to show ID to the TSA and recorded the whole incident has been cleared of all charges.
A recent Dilbert comic about the TSA.
I wrote an op-ed for CNN.com on the demise of the color-coded terrorist threat level system. It’s nothing I haven’t said before, so I won’t reprint it here.
The best thing about the system was the jokes it inspired late-night comedians and others to make. In memoriam, I asked my blog readers to post their favorites.
My previous essays on the topic:
This is the first piece of writing I’ve seen from Kip Hawley since he left the TSA in 2009. It’s about the Domodedovo Airport bombing, but it’s mostly generalities and platitudes.
By hacking HTTP status codes, one website can learn whether you’re logged in to other websites.
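The underlying trick is simple to model. In this sketch (the endpoint, cookie name, and status codes are made up for illustration), a site’s account page returns 200 for a logged-in user and a redirect for everyone else; a third-party page that loads that URL as a script or image can’t read the response cross-origin, but it can observe whether the load succeeded, which is all it needs:

```python
def account_page_status(request_cookies):
    """Hypothetical server behavior: the account page returns 200 for
    a session holder and a 302 redirect to the login page otherwise."""
    if request_cookies.get("session") == "valid":
        return 200
    return 302

def attacker_guesses_logged_in(status):
    """Cross-origin, the attacker's page can't read the response body,
    but embedding the URL as a <script> or <img> and watching the
    onload/onerror events reveals whether the load behaved like a
    success -- enough to distinguish the two statuses."""
    return status == 200

print(attacker_guesses_logged_in(account_page_status({"session": "valid"})))  # True
print(attacker_guesses_logged_in(account_page_status({})))                    # False
```

One bit of information per probe, but repeated across many sites it builds a profile of where the visitor has accounts.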
This is a clever development in ATM skimming technology. It’s a skimmer that attaches to the ATM-room door lock, not the ATM itself. Combined with a hidden camera, it’s an ATM skimmer that requires no modification to the ATM.
Sensible comment on terrorist targets of choice:
I’d never heard the term “micromort” before. It’s a probability: a one-in-a-million probability of death. For example, one-micromort activities are “travelling 230 miles (370 km) by car (accident),” and “living 2 days in New York or Boston (air pollution).”
I don’t know if that data is accurate; it’s from the Wikipedia entry. In any case, I think it’s a useful term.
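What makes the unit handy is that risk arithmetic becomes trivial. Taking the Wikipedia driving figure at face value (illustrative, not vetted):

```python
def micromorts(probability_of_death):
    """Convert a probability of death into micromorts (1 micromort = 1e-6)."""
    return probability_of_death * 1_000_000

# If 230 miles of driving is one micromort, a 1,000-mile road trip is:
per_mile = 1 / 230            # micromorts per mile of driving
road_trip = 1000 * per_mile   # about 4.3 micromorts
print(round(road_trip, 1))    # 4.3
print(micromorts(1e-6))       # 1.0
```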
I was interviewed for a story on a mouse-powered explosives detector. Animal senses are better than any detection machine current technology can build, which makes it a good idea. But the challenges of using animals in this sort of situation are considerable. The neat thing about the technology profiled in the article, which the article didn’t make as clear as I would have liked, is how far it goes in making the mice just another interchangeable part in the system. They’re encased in cartridges, which can be swapped in and out of the system. They don’t need regular handling. If we are ever going to see animals in a mass-produced system, it’s going to look something like this.
Design failure means you can pick winning scratch lottery tickets before scratching the coatings off. Most interesting is that there’s statistical evidence that this sort of attack has been occurring in the wild: not necessarily this particular attack, but some way to separate winners from losers without voiding the tickets.
Since the above article was published in Wired, another technique of hacking scratch lottery tickets has surfaced: store clerks capitalizing on losing streaks. If you assume any given package of lottery tickets has a similar number of winners, wait until you sell most of the way through the packet without seeing those winners and then buy the rest.
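The clerk’s edge is just conditional probability. A sketch with made-up pack numbers: if a pack of 100 tickets is printed with 20 winners, the chance that the next ticket sold is a winner starts at 20 percent, but climbs steadily as losers come off the roll:

```python
from fractions import Fraction

def p_next_winner(pack_size, winners, losers_sold_so_far):
    """Probability the next ticket is a winner, given that every
    ticket sold so far was a loser (all winners still in the pack)."""
    remaining = pack_size - losers_sold_so_far
    return Fraction(winners, remaining)

print(p_next_winner(100, 20, 0))    # 1/5   (20%)
print(p_next_winner(100, 20, 60))   # 1/2   (50%)
print(p_next_winner(100, 20, 79))   # 20/21 (~95%)
```

A clerk who watches the roll sells the early tickets to customers and buys the tail end for himself.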
How feed-over-email circumvents Chinese censorship.
Julian Sanchez on balancing privacy and security.
I’ve written about the false trade-off between security and privacy.
It amazes me that credit card fraud is so easy that you can run it from prison.
Roger Grimes has an article describing “the seven types of malicious hackers.” I generally like taxonomies, and this one is pretty good.
A group of students at the Chinese University of Hong Kong have figured out how to store data in bacteria. The article talks about how secure it is, and the students even coined the term “bioencryption,” but I don’t see any encryption. It’s just storage.
In another article, one of the researchers claims: “Bacteria can’t be hacked.”
Why can’t bacteria be hacked? If the storage system is attached to a network, it’s just as vulnerable as anything else attached to a network. And if it’s disconnected from any network, then it’s just as secure as anything else disconnected from a network. The problem the U.S. diplomats had was authorized access to the WikiLeaks cables by someone who decided to leak them. No cryptography helps against that.
There is cryptography in the project: “In addition we have created an encryption module with the R64 Shufflon-Specific Recombinase to further secure the information.”
If the group is smart, this will be some conventional cryptography algorithm used to encrypt the data before it is stored on the bacteria.
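Something like the following sketch, in other words: encrypt with a conventional cipher first, then encode the ciphertext into bases, two bits per nucleotide. For self-containedness the sketch stands in a toy SHA-256 counter-mode keystream for a real cipher like AES (a simplification, not a recommendation); the encoding step is the same either way:

```python
import hashlib
from itertools import count

BASES = "ACGT"  # two bits of data per nucleotide

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """Toy stream cipher: SHA-256 in counter mode. In practice you'd
    use a real cipher -- the point is only that encryption happens
    *before* the biological encoding."""
    out = bytearray()
    for block in count():
        if len(out) >= len(data):
            break
        out += hashlib.sha256(key + block.to_bytes(8, "big")).digest()
    return bytes(b ^ k for b, k in zip(data, out))

def to_dna(ciphertext: bytes) -> str:
    """Map each byte to four bases, most significant bits first."""
    return "".join(BASES[(b >> shift) & 0b11]
                   for b in ciphertext for shift in (6, 4, 2, 0))

def from_dna(strand: str) -> bytes:
    nums = [BASES.index(c) for c in strand]
    return bytes((nums[i] << 6) | (nums[i + 1] << 4) | (nums[i + 2] << 2) | nums[i + 3]
                 for i in range(0, len(nums), 4))

key = b"shared secret"
msg = b"attack at dawn"
strand = to_dna(keystream_xor(key, msg))
print(len(strand))                            # 56 bases for a 14-byte message
print(keystream_xor(key, from_dna(strand)))   # b'attack at dawn'
```

The security then rests entirely on the cipher and the key, exactly as it would on a disk; the bacteria are just an exotic storage medium.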
In any case, this is fascinating and interesting work. I just don’t see any new form of encryption, or anything inherently unhackable.
Scareware is fraudulent software that uses deceptive advertising to trick users into believing they’re infected with some variety of malware, then convinces them to pay money to protect themselves. The infection isn’t real, and the software they buy is fake, too. It’s all a scam.
One scareware operator sold “more than 1 million software products” at “$39.95 or more,” and now has to pay $8.2 million to settle a Federal Trade Commission complaint.
Seems to me that $40 per customer, minus $8.20 to pay off the FTC, is still a pretty good revenue model. Their operating costs can’t be very high, since the software doesn’t actually do anything. Yes, a court ordered them to close down their business, but certainly there are other creative entrepreneurs who can recognize a business opportunity when they see it.
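The arithmetic, using the article’s round numbers, is worth spelling out:

```python
customers = 1_000_000     # "more than 1 million software products" sold
price = 39.95             # "$39.95 or more" per copy
settlement = 8_200_000    # FTC settlement

revenue = customers * price
settlement_per_customer = settlement / customers
net_per_customer = price - settlement_per_customer

print(f"revenue:              ${revenue:,.0f}")              # $39,950,000
print(f"settlement per head:  ${settlement_per_customer:.2f}")  # $8.20
print(f"net per customer:     ${net_per_customer:.2f}")         # $31.75
```

Nearly $32 of profit per victim even after the settlement, for software that does nothing.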
I am speaking at the RSA Conference on February 16 in San Francisco. In the morning, I’ll be speaking about societal security and the dishonest minority. In the afternoon, I’ll be on a panel on cyberwar.
I am speaking at the AAAS annual meeting, on a panel entitled “Promoting Security and Sustaining Privacy: How Do We Find the Right Balance?” on February 19 in Washington, DC.
I am keynoting the 4th Annual CSO Roundtable Spring Conference on March 14 in Alexandria, VA.
I am speaking at Security Summit 2011 on March 15 in Milan.
This screen shot is from the movie “Good Time Max.” 17 minutes and 52 seconds into the movie, it shows Blowfish being used as an encryption algorithm.
A UK immigration officer decided to get rid of his wife by putting her on the no-fly list, ensuring that she could not return to the UK from abroad. This worked for three years, until he put in for a promotion and — during the routine background check — someone investigated why his wife was on the no-fly list.
Okay, so he’s an idiot. And a bastard. But the real piece of news here is how easy it is for a UK immigration officer to put someone on the no-fly list with *absolutely no evidence* that that person belongs there. And how little auditing is done on that list. Once someone is on, they’re on for good.
That’s simply no way to run a free country.
The whitelist/blacklist debate is far older than computers, and it’s instructive to recall what works where. Physical security works generally on a whitelist model: if you have a key, you can open the door; if you know the combination, you can open the lock. We do it this way not because it’s easier (although it is generally much easier to make a list of people who should be allowed through your office door than a list of people who shouldn’t) but because it’s a security system that can be implemented automatically, without people.
To find blacklists in the real world, you have to start looking at environments where almost everyone is allowed. Casinos are a good example: everyone can come in and gamble except those few specifically listed in the casino’s black book or the more general Griffin book. Some retail stores have the same model — a Google search on “banned from Wal-Mart” results in 1.5 million hits, including Megan Fox — although you have to wonder about enforcement. Does Wal-Mart have the same sort of security manpower as casinos?
National borders certainly have that kind of manpower, and Marcus Ranum is correct to point to passport control as a system with both a whitelist and a blacklist. There are people who are allowed in with minimal fuss, people who are summarily arrested with as minimal a fuss as possible, and people in the middle who receive some amount of fussing. Airport security works the same way: the no-fly list is a blacklist, and people with redress numbers are on the whitelist.
Computer networks share characteristics with your office and Wal-Mart: sometimes you only want a few people to have access, and sometimes you want almost everybody to have access. And you see whitelists and blacklists at work in computer networks. Access control is whitelisting: if you know the password, or have the token or biometric, you get access. Antivirus is blacklisting: everything coming into your computer from the Internet is assumed to be safe unless it appears on a list of bad stuff. On computers, unlike the real world, it takes no extra manpower to implement a blacklist — the software can do it largely for free.
Traditionally, execution control has been based on a blacklist. Computers are so complicated and applications so varied that it just doesn’t make sense to limit users to a specific set of applications. The exception is constrained environments, such as computers in hotel lobbies and airline club lounges. On those, you’re often limited to an Internet browser and a few common business applications.
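Reduced to code, the two models differ only in their default. A minimal sketch (the program names are invented):

```python
WHITELIST = {"browser.exe", "word.exe"}      # kiosk model: default deny
BLACKLIST = {"virus.exe", "keylogger.exe"}   # antivirus model: default allow

def whitelist_allows(program):
    return program in WHITELIST          # unknown programs are blocked

def blacklist_allows(program):
    return program not in BLACKLIST      # unknown programs run freely

# The interesting case is a brand-new, never-before-seen program:
print(whitelist_allows("new-app.exe"))   # False -- safe but restrictive
print(blacklist_allows("new-app.exe"))   # True  -- permissive but exposed
```

Everything else — who maintains the list, how it’s updated, what the failure mode costs — follows from that choice of default.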
Lately, we’re seeing more whitelisting on closed computing platforms. The iPhone works on a whitelist: if you want a program to run on the phone, you need to get it approved by Apple and put in the iPhone store. Your Wii game machine works the same way. This is done primarily because the manufacturers want to control the economic environment, but it’s being sold partly as a security measure. But in this case, more security equals less liberty; do you really want your computing options limited by Apple, Microsoft, Google, Facebook, or whoever controls the particular system you’re using?
Turns out that many people do. Apple’s control over its apps hasn’t seemed to hurt iPhone sales, and Facebook’s control over its apps hasn’t seemed to affect Facebook’s user numbers. And honestly, quite a few of us would have had an easier time over the Christmas holidays if we could have implemented a whitelist on the computers of our less-technical relatives.
For these two reasons, I think the whitelist model will continue to make inroads into our general purpose computers. And those of us who want control over our own environments will fight back — perhaps with a whitelist we maintain personally, but more probably with a blacklist.
This essay previously appeared in “Information Security” as the first half of a point-counterpoint with Marcus Ranum. You can read Marcus’s half there as well.
The Griffin Book:
Manufacturers controlling economic environment on their systems:
Since 1998, CRYPTO-GRAM has been a free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise. You can subscribe, unsubscribe, or change your address on the Web at <http://www.schneier.com/crypto-gram.html>. Back issues are also available at that URL.
Please feel free to forward CRYPTO-GRAM, in whole or in part, to colleagues and friends who will find it valuable. Permission is also granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.
CRYPTO-GRAM is written by Bruce Schneier. Schneier is the author of the best sellers “Schneier on Security,” “Beyond Fear,” “Secrets and Lies,” and “Applied Cryptography,” and an inventor of the Blowfish, Twofish, Threefish, Helix, Phelix, and Skein algorithms. He is the Chief Security Technology Officer of BT BCSG, and is on the Board of Directors of the Electronic Privacy Information Center (EPIC). He is a frequent writer and lecturer on security topics. See <http://www.schneier.com>.
Crypto-Gram is a personal newsletter. Opinions expressed are not necessarily those of BT.
Copyright (c) 2011 by Bruce Schneier.