Crypto-Gram

March 15, 2012

by Bruce Schneier
Chief Security Technology Officer, BT
schneier@schneier.com
http://www.schneier.com

A free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise.

For back issues, or to subscribe, visit <http://www.schneier.com/crypto-gram.html>.

You can read this issue on the web at <http://www.schneier.com/crypto-gram-1203.html>. These same essays and news items appear in the “Schneier on Security” blog at <http://www.schneier.com/>, along with a lively comment section. An RSS feed is available.


In this issue:
     “Liars and Outliers”: The Big Idea
     “Liars and Outliers”: Interview on The Browser
     “Liars and Outliers” Update
     Lousy Random Numbers Cause Insecure Public Keys
     Video Shows TSA Full-Body Scanner Failure
     News
     Themes from the RSA Conference
     Schneier News
     How Changing Technology Affects Security

“Liars and Outliers”: The Big Idea

My big idea is a big question. Every cooperative system contains parasites. How do we ensure that society’s parasites don’t destroy society’s systems?

It’s all about trust, really. Not the intimate trust we have in our close friends and relatives, but the more impersonal trust we have in the various people and systems we interact with in society. I trust airline pilots, hotel clerks, ATMs, restaurant kitchens, and the company that built the computer I’m writing this short essay on. I trust that they have acted and will act in the ways I expect them to. This type of trust is more a matter of consistency or predictability than of intimacy.

Of course, all of these systems contain parasites. Most people are naturally trustworthy, but some are not. There are hotel clerks who will steal your credit card information. There are ATMs that have been hacked by criminals. Some restaurant kitchens serve tainted food. There was even an airline pilot who deliberately crashed his Boeing 767 into the Atlantic Ocean in 1999.

My central metaphor is the Prisoner’s Dilemma, which nicely exposes the tension between group interest and self-interest. And the dilemma even gives us a terminology to use: cooperators act in the group interest, and defectors act in their own selfish interest, to the detriment of the group. Too many defectors, and everyone suffers—often catastrophically.
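
To make the payoff structure concrete, here is a toy sketch in Python. The sentence lengths are my own invented numbers, not anything from the book; any values with the same ordering tell the same story:

    # Classic Prisoner's Dilemma payoffs: years in prison, written as
    # (my sentence, your sentence) for each pair of moves. Defecting
    # is individually rational whatever the other player does, yet
    # mutual defection leaves both worse off than mutual cooperation.
    PAYOFF = {
        ("cooperate", "cooperate"): (1, 1),   # both stay quiet
        ("cooperate", "defect"):    (10, 0),  # I am betrayed
        ("defect",    "cooperate"): (0, 10),  # I betray and walk free
        ("defect",    "defect"):    (5, 5),   # both betray
    }

    for mine in ("cooperate", "defect"):
        for yours in ("cooperate", "defect"):
            me, you = PAYOFF[(mine, yours)]
            print(f"{mine:9} / {yours:9} -> me: {me} years, you: {you} years")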

The Prisoner’s Dilemma is not only useful in describing the problem, but also serves as a way to organize solutions. We humans have developed four basic mechanisms for limiting defectors: what I call societal pressures. We use morals, reputation, laws, and security systems. It’s all coercion, really, although we don’t call it that. I’ll spare you the details; it would require a book to explain. And it did.

This book marks another chapter in my career’s endless series of generalizations. From mathematical security—cryptography—to computer and network security; from there to security technology in general; then to the economics of security and the psychology of security; and now to—I suppose—the sociology of security. The more I try to understand how security works, the more of the world I need to encompass within my model.

When I started out writing this book, I thought I’d be talking a lot about the global financial crisis of 2008. It’s an excellent example of group interest vs. self-interest, and how a small minority of parasites almost destroyed the planet’s financial system. I even had a great quote by former Federal Reserve Chairman Alan Greenspan, where he admitted a “flaw” in his worldview. The exchange, which took place when he was being questioned by Congressman Henry Waxman at a 2008 Congressional hearing, once formed the opening paragraphs of my book. I called the defectors “the dishonest minority,” which was my original title.

That unifying example eventually faded into the background, to be replaced by a lot of separate examples. I talk about overfishing, childhood immunizations, paying taxes, voting, stealing, airplane security, gay marriage, and a whole lot of other things. I dumped the phrase “dishonest minority” entirely, partly because I didn’t need it and partly because a vocal few early readers were reading it not as “the small percentage of us that are dishonest” but as “the minority group that is dishonest”—not at all the meaning I was trying to convey.

I didn’t even realize I was talking about trust until most of the way through. It was a couple of early readers who—coincidentally, on the same day—told me my book wasn’t about security, it was about trust. More specifically, it was about how different societal pressures, security included, induce trust. This interplay between cooperators and defectors, trust and security, compliance and coercion, affects everything having to do with people.

In the book, I wander through a dizzying array of academic disciplines: experimental psychology, evolutionary psychology, sociology, economics, behavioral economics, evolutionary biology, neuroscience, game theory, systems dynamics, anthropology, archeology, history, political science, law, philosophy, theology, cognitive science, and computer security. It sometimes felt as if I were blundering through a university, kicking down doors and demanding answers. “You anthropologists: what can you tell me about early human transgressions and punishments?” “Okay neuroscientists, what’s the brain chemistry of cooperation? And you evolutionary psychologists, how can you explain that?” “Hey philosophers, what have you got?” I downloaded thousands—literally—of academic papers. In pre-Internet days I would have had to move into an academic library.

What’s really interesting to me is what this all means for the future. We’ve never been able to eliminate defections. No matter how much societal pressure we bring to bear, we can’t bring the murder rate in society to zero. We’ll never see the end of bad corporate behavior, or embezzlement, or rude people who make cell phone calls in movie theaters. That’s fine, but it starts getting interesting when technology makes each individual defection more dangerous. That is, fishermen will survive even if a few of them defect and overfish—until defectors can deploy driftnets and single-handedly collapse the fishing stock. The occasional terrorist with a machine gun isn’t a problem for society in the overall scheme of things; but a terrorist with a nuclear weapon could be.

Also—and this is the final kicker—not all defectors are bad. If you think about the notions of cooperating and defecting, they’re defined in terms of the societal norm. Cooperators are people who follow the formal or informal rules of society. Defectors are people who, for whatever reason, break the rules. That definition says nothing about the absolute morality of the society or its rules. When society is in the wrong, it’s defectors who are in the vanguard for change. So it was defectors who helped escaped slaves in the antebellum American South. It’s defectors who are agitating to overthrow repressive regimes in the Middle East. And it’s defectors who are fueling the Occupy Wall Street movement. Without defectors, society stagnates.

We simultaneously need more societal pressure to deal with the effects of technology, and less societal pressure to ensure an open, free, and evolving society. This is our big challenge for the coming decade.

This essay originally appeared on John Scalzi’s blog, Whatever.
http://whatever.scalzi.com/2012/02/16/…


“Liars and Outliers”: Interview on The Browser

I was asked to talk about five books related to trust.

Q. You’re best known as a security expert but our theme today is “trust.” How would you describe the connection between the two?

A. Security exists to facilitate trust. Trust is the goal, and security is how we enable it. Think of it this way: As members of modern society, we need to trust all sorts of people, institutions and systems. We have to trust that they’ll treat us honestly, won’t take advantage of us and so on—in short, that they’ll behave in a trustworthy manner. Security is how we induce trustworthiness, and by extension enable trust.

An example might make this clearer. For commerce to work smoothly, merchants and customers need to trust each other. Customers need to trust that merchants won’t misrepresent the goods they’re selling. Merchants need to trust that customers won’t steal stuff without paying. Each needs to trust that the other won’t cheat somehow. Security is how we make that work, billions of times a day. We do that through obvious measures like alarm systems that prevent theft and anti-counterfeiting measures in currency that prevent fraud, but I mean a lot of other things as well. Consumer protection laws prevent merchants from cheating. Other laws prevent burglaries. Less formal measures like reputational considerations help keep merchants, and customers in less anonymous communities, from cheating. And our inherent moral compass keeps most of us honest most of the time.

In my new book “Liars and Outliers,” I call these societal pressures. None of them are perfect, but all of them—working together—are what keeps society functioning. Of course there is, and always will be, the occasional merchant or customer who cheats. But as long as they’re rare enough, society thrives.

Q. How has the nature of trust changed in the information age?

A. These notions of trust and trustworthiness are as old as our species. Many of the specific societal pressures that induce trust are as old as civilisation. Morals and reputational considerations are certainly that old, as are laws. Technical security measures have changed with technology, as have the details of reputational and legal systems, but by and large they’re basically the same.

What has changed in modern society is scale. Today we need to trust more people than ever before, further away—whether politically, ethnically or socially—than ever before. We need to trust larger corporations, more diverse institutions and more complicated systems. We need to trust via computer networks. This all makes trust, and inducing trust, harder. At the same time, the scaling of technology means that the bad guys can do more damage than ever before. That also makes trust harder. Navigating all of this is one of the most fundamental challenges of our society in this new century.

Q. Given the dangers out there, should we trust anyone? Isn’t “trust no one” the first rule of security?

A. It might be the first rule of security, but it’s the worst rule of society. I don’t think I could even total up all the people, institutions and systems I trusted today. I trusted that the gas company would continue to provide the fuel I needed to heat my house, and that the water coming out of my tap was safe to drink. I trusted that the fresh and packaged food in my refrigerator was safe to eat—and that certainly involved trusting people in several countries. I trusted a variety of websites on the Internet. I trusted my automobile manufacturer, as well as all the other drivers on the road.

I am flying to Boston right now, so that requires trusting several major corporations, hundreds of strangers—either working for those corporations, sitting on my plane or just standing around in the airport—and a variety of government agencies. I even had to trust the TSA [US Transportation Security Administration], even though I know it’s doing a lousy job—and so on. And it’s not even 9:30am yet! The number of people each of us trusts every day is astounding. And we trust them so completely that we often don’t even think about it.

We don’t walk into a restaurant and think: “The food vendors might have sold the restaurant tainted food, the cook might poison it, the waiter might clone my credit card, other diners might steal my wallet, the building constructor might have weakened the roof, and terrorists might bomb the place.” We just sit down and eat. And the restaurant trusts that we won’t steal anyone else’s wallet or leave a bomb under our chair, and will pay when we’re done. Without trust, society collapses. And without societal pressures, there’s no trust. The devil is in the details, of course, and that’s what my book is about.

Q. As an individual, what security threats scare you the most?

A. My primary concerns are threats from the powerful. I’m not worried about criminals, even organised crime. Or terrorists, even organised terrorists. Those groups have always existed, always will, and they’ll always operate on the fringes of society. Societal pressures have done a good job of keeping them that way. It’s much more dangerous when those in power use that power to subvert trust. Specifically, I am thinking of governments and corporations.

Let me give you a few examples. The global financial crisis was not a result of criminals, it was perpetrated by legitimate financial institutions pursuing their own self-interest. The major threats against our privacy are not from criminals, they’re from corporations trying to more accurately target advertising. The most significant threat to the freedom of the Internet is from large entertainment companies, in their misguided attempt to stop piracy. And the cyberwar rhetoric is likely to cause more damage to the Internet than criminals could ever dream of.

What scares me the most is that today, in our hyper-connected, hyper-computed, high-tech world, we will get societal pressures wrong to catastrophic effect.

Q. Let’s get stuck into the books you’ve chosen on this theme of trust. Beginning with Yochai Benkler’s “The Penguin and the Leviathan.”

A. This could be considered a companion book to my own. I write from the perspective of security—how society induces cooperation. Benkler takes the opposite perspective—how does this cooperation work and what is its value? More specifically, what is its value in the 21st century information-age economy? He challenges the pervasive economic view that people are inherently selfish creatures, and shows that actually we are naturally cooperative. More importantly, he discusses the enormous value of cooperation in society, and the new ways it can be harnessed over the Internet.

I think this view is important. Our culture is pervaded with the idea that individualism is paramount—Thomas Hobbes’s notion that we are all autonomous individuals who willingly give up some of our freedom to the government in exchange for safety. It’s complete nonsense. Humans have never lived as individuals. We have always lived in communities, and we have always succeeded or failed as cooperative groups. The fact that someone who separates himself and lives alone—think of Henry David Thoreau at Walden—is so remarkable indicates how rare it is.

Benkler understands this, and wants us to accept the cooperative nature of ourselves and our societies. He also gives the same advice for the future that I do—that we need to build social mechanisms that encourage cooperation over control. That is, we need to facilitate trust in society.

Q. What’s next on your list?

A. “The Folly of Fools,” by the biologist Robert Trivers. Trivers has studied self-deception in humans, and asks how it evolved to be so pervasive. Humans are masters at self-deception. We regularly deceive ourselves in a variety of different circumstances. But why? How is it possible for self-deception—perceiving reality to be different than it really is—to have survival value? Why is it that genetic tendencies for self-deception are likely to propagate to the next generation?

Trivers’s book-long answer is fascinating. Basically, deception can have enormous evolutionary benefits. In many circumstances, especially those involving social situations, individuals who are good at deception are better able to survive and reproduce. And self-deception makes us better at deception. For example, there is value in my being able to deceive you into thinking I am stronger than I really am. You’re less likely to pick a fight with me, I’m more likely to win a dominance struggle without fighting, and so on. I am better able to bluff you if I actually believe I am stronger than I really am. So we deceive ourselves in order to be better able to deceive others.

The psychology of deception is fundamental to my own writing on trust. It’s much easier for me to cheat you if you don’t believe I am cheating you.

Q. Third up, “The Murderer Next Door” by David M Buss.

A. There have been a number of books about the violent nature of humans, particularly men. I chose “The Murderer Next Door” both because it is well-written and because it is relatively new, published in 2005. David M Buss is a psychologist, and he writes well about the natural murderousness of our species. There’s a lot of data to support natural human murderousness, and not just murder rates in modern societies. Anthropological evidence indicates that between 15% and 25% of prehistoric males died in warfare.

This murderousness resulted in an evolutionary pressure to be clever. Here’s Buss writing about it: “As the motivations to murder evolved in our minds, a set of counterinclinations also developed. Killing is a risky business. It can be dangerous and inflict horrible costs on the victim. Because it’s so bad to be dead, evolution has fashioned ruthless defences to prevent being killed, including killing the killer. Potential victims are therefore quite dangerous themselves. In the evolutionary arms race, homicide victims have played a critical and unappreciated role—they pave the way for the evolution of anti-homicide defences.”

Q. Your fourth book is by psychologist, science writer and previous FiveBooks interviewee Steven Pinker.

A. “The Better Angels of Our Nature” is Steven Pinker’s explanation as to why, despite the selection pressures for murderousness in our evolutionary past, violence has declined in so many cultures around the world. It’s a fantastic book, and I recommend that everyone read it. From my perspective, I could sum up his argument very simply: Societal pressures have worked.

Of course it’s more complicated than that, and Pinker does an excellent job of leading the reader through his analysis and conclusions. First, he spends six chapters documenting the fact that violence has in fact declined. In the next two chapters, he does his best to figure out exactly what has caused the “better angels of our nature” to prevail over our more natural demons. His answers are complicated, and expand greatly on the interplay among the various societal pressures which I talk about myself. It’s not things like bigger jails and more secure locks that are making society safer. It’s things like the invention of printing and the resultant rise of literacy, the empowerment of women and the rise of universal moral and ethical principles.

Q. Your final choice is “Braintrust” by the neuroscientist Patricia Churchland.

A. This book is about the neuroscience of morality. It’s brand new—published in 2011—which is good because this is a brand new field of science, and new discoveries are happening all the time. Morality is the most basic of societal pressures, and Churchland explains how it works.

This book tries to understand the neuroscience behind trust and trustworthiness. In her own words: “The hypothesis on offer is that what we humans call ethics or morality is a four-dimensional scheme for social behavior that is shaped by interlocking brain processes: (1) caring (rooted in attachment to kin and kith and care for their well-being), (2) recognition of others’ psychological states (rooted in the benefits of predicting the behavior of others), (3) problem-solving in a social context (e.g., how we should distribute scarce goods, settle land disputes; how we should punish the miscreants), and (4) learning social practices (by positive and negative reinforcement, by imitation, by trial and error, by various kinds of conditioning, and by analogy).”

Those are our innate human societal pressures. They are the security systems that keep us mostly trustworthy most of the time—enough for most of us to be trusting enough for society to survive.

Q. Are we safer for all the security theatre of airport checks?

A. Of course not. There are two parts to the question. One: Are we doing the right thing? That is, does it make sense for America to focus its anti-terrorism security efforts on airports and airplanes? And two: Are we doing things right? In other words, are the anti-terrorism measures at airports doing the job and preventing terrorism? I say the answer to both of those questions is no. Focusing on airports, and specific terrorist tactics like shoes and liquids, is a poor use of our money because it’s easy for terrorists to switch targets and tactics. And the current TSA security measures don’t keep us safe because it’s too easy to bypass them.

There are two basic kinds of terrorists—random idiots and professionals. Pretty much any airport security, even the pre-9/11 measures, will protect us against random idiots. They will get caught. And pretty much nothing will protect us against professionals. They’ve researched our security and know the weaknesses. By the time the plot gets to the airport, it’s too late. Much more effective is for the US to spend its money on intelligence, investigation and emergency response. But this is a shorter answer than your readers deserve, and I suggest they read more of my writings on the topic.

Q. How does the rise of cloud computing affect personal risk?

A. Like everything else, cloud computing is all about trust. Trust isn’t new in computing. I have to trust my computer’s manufacturer. I have to trust my operating system and software. I have to trust my Internet connection and everything associated with that. I have to trust all sorts of data I receive from other sources.

So on the one hand, cloud computing just adds another level of trust. But it’s an important level of trust. For most of us, it reduces our risk. If I have my email on Google, my photos on Flickr, my friends on Facebook and my professional contacts on LinkedIn, then I don’t have to worry much about losing my data. If my computer crashes I’ll still have all my email, photos and contacts. This is the way the iPhone works with iCloud—if I lose my phone, I can get a new one and all my data magically reappears.

On the other hand, I have to trust my cloud providers. I have to trust that Facebook won’t misuse the personal information it knows about me. I have to trust that my data won’t get shipped off to a server in a foreign country with lax privacy laws, and that the companies who have my data will not hand it over to the police without a court order. I’m not able to implement my own security around my data; I have to take what the cloud provider offers. And I must trust that’s good enough, often without knowing anything about it.

Q. Finally, how many Bruce Schneier Facts are true?

A. Seven.

This Q&A originally appeared on TheBrowser.com.
http://thebrowser.com/interviews/…


“Liars and Outliers” Update

The book is selling well. (Signed copies are still available on the website.) All the online stores have it, and most bookstores as well. It is available in Europe and elsewhere outside the U.S. And for those who want a DRM-free electronic copy, it’s available at the O’Reilly online bookstore for $11.99.

The book’s webpage:
http://www.schneier.com/book-lo.html

Ordering a signed copy:
http://www.schneier.com/book-lo.html#signed

O’Reilly.com webpage:
http://shop.oreilly.com/product/9781118143308.do

Last month, I linked to a bunch of book reviews. Here are more:
http://www.versvs.net/anotacion/…
http://nakedsecurity.sophos.com/2012/02/17/…
http://www.computerweekly.com/blogs/david_lacey/…
http://books.slashdot.org/story/12/02/22/1955201/…
http://econlog.econlib.org/archives/2012/03/…
http://findwhatworks.wordpress.com/2012/02/29/…
http://epic.org/alert/epic_alert_1904.html
http://nyjournalofbooks.com/review/…
https://apapadop.wordpress.com/2012/03/11/…
http://groups.yahoo.com/group/techbooks/message/888

And here are some audio and video interviews about the book:
http://minnesota.publicradio.org/display/web/2012/…
http://threatpost.com/en_us/blogs/…
http://www.youtube.com/watch?…
http://searchsecurity.techtarget.com/video/…
I take the “Page 99 Test”:
http://page99test.blogspot.com/2012/02/…
Gizmodo published an excerpt from Chapter 17.
http://gizmodo.com/5887528/…
Chapter 1 is available on my website.
http://www.schneier.com/book-lo-chapter1.html

And here are all the figures, mostly for people reading the e-book.
http://www.schneier.com/book-lo-figures.html


Lousy Random Numbers Cause Insecure Public Keys

There’s some excellent research surveying public keys in the wild. Basically, the researchers found that a small fraction of them (27,000 out of 7.1 million, or 0.38%) share a common factor and are inherently weak. The researchers can break those public keys, and anyone who duplicates their research can as well.
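
To see why a shared factor is fatal, here is a minimal sketch in Python. It is my own toy illustration with small made-up primes, not the researchers’ code; real keys use primes of 512 bits and up:

    # If two RSA moduli were generated with a bad RNG and share a
    # prime p, a single gcd computation recovers p and factors both
    # keys -- no private information needed.
    from math import gcd

    p, q1, q2 = 1000003, 1000033, 1000037  # toy primes
    n1, n2 = p * q1, p * q2                # two "independent" public keys

    shared = gcd(n1, n2)
    if shared > 1:
        print("shared prime:", shared)
        print("n1 =", shared, "*", n1 // shared)
        print("n2 =", shared, "*", n2 // shared)

A gcd is cheap even for 1024-bit numbers, which is what makes surveying millions of collected keys feasible.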

The cause of this is almost certainly a lousy random number generator used to create those public keys in the first place. This shouldn’t come as a surprise. One of the hardest parts of cryptography is random number generation. It’s really easy to write a lousy random number generator, and it’s not at all obvious that it is lousy. Randomness is a non-functional requirement, and unless you specifically test for it—and know *how* to test for it—you’re going to think your cryptosystem is working just fine. (One of the reporters who called me about this story said that the researchers told him about a real-world random number generator that produced just seven different random numbers.) So it’s likely these weak keys are accidental.
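
Here is a contrived Python sketch of that anecdote. The seven values are made up; the point is that nothing about the output stream looks obviously wrong:

    # A generator can look random in casual use and still be
    # catastrophically weak. This one returns one of only seven fixed
    # values; unless you count distinct outputs, you may never notice.
    import random

    VALUES = [0x3A7F, 0x91C2, 0x5D08, 0xE6B1, 0x2F94, 0x7C63, 0xB8D5]

    def lousy_rand():
        return random.choice(VALUES)  # "random," but only 7 outcomes

    distinct = {lousy_rand() for _ in range(10000)}
    print("distinct values in 10,000 draws:", len(distinct))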

It’s certainly possible, though, that some random number generators have been deliberately weakened. The obvious culprits are national intelligence services like the NSA. I have no evidence that this happened, but if I were in charge of weakening cryptosystems in the real world, the first thing I would target is random number generators. They’re easy to weaken, and it’s hard to detect that you’ve done anything. Much safer than tweaking the algorithms, which can be tested against known test vectors and alternate implementations. But again, I’m just speculating here.

What is the security risk? There’s some, but it’s hard to know how much. We can assume that the bad guys can replicate this experiment and find the weak keys. But they’re random, so it’s hard to know how to monetize this attack. Maybe the bad guys will get lucky and one of the weak keys will lead to some obvious way to steal money, or trade secrets, or national intelligence. Maybe.

And what happens now? My hope is that the researchers know which implementations of public-key systems are susceptible to these bad random numbers—they didn’t name names in the paper—and alerted them, and that those companies will fix their systems. (I recommend my own Fortuna, from “Cryptography Engineering.”) I hope that everyone who implements a home-grown random number generator will rip it out and put in something better. But I don’t hold out much hope. Bad random numbers have broken a lot of cryptosystems in the past, and will continue to do so in the future.
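
For a sense of what “something better” means in practice: don’t generate key material yourself; read it from the operating system’s randomness pool, which is what vetted libraries do underneath. Here is a minimal Python sketch (my illustration, not any particular library’s code):

    # Draw a random integer of an exact bit length from the OS CSPRNG.
    import os

    def random_int(bits):
        nbytes = (bits + 7) // 8
        n = int.from_bytes(os.urandom(nbytes), "big")
        n &= (1 << bits) - 1   # trim to the requested width
        n |= 1 << (bits - 1)   # set the top bit so the length is exact
        return n

    print(random_int(256))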

Paper:
http://eprint.iacr.org/2012/064.pdf

The title of the paper, “Ron was wrong, Whit is right,” refers to the fact that RSA is inherently less secure because it needs two large random primes. Discrete log based algorithms, like DSA and ElGamal, are less susceptible to this vulnerability because they only need one random prime.

News:
http://www.nytimes.com/2012/02/15/technology/…
http://arstechnica.com/business/news/2012/02/…
http://www.newscientist.com/blogs/onepercent/2012/…
http://www.theregister.co.uk/2012/02/21/…
http://www.businessweek.com/articles/2012-02-16/…
Fortuna:
http://en.wikipedia.org/wiki/Fortuna_PRNG


Video Shows TSA Full-Body Scanner Failure

The Internet is buzzing about this video, showing a blogger walking through two different types of full-body scanners with metal objects. Basically, by placing the object on your side, the black image is hidden against the scanner’s black background. This isn’t new, by the way. This vulnerability was discussed in a paper published last year by the Journal of Transportation Security. And there’s a German TV news segment from 2010 that shows someone sneaking explosives past a full-body scanner.

The TSA’s response is pretty uninformative. I’d include a quote, but it really doesn’t say anything. And the original blogger is now writing that the TSA is pressuring journalists not to cover the story.

These full-body scanners have been a disaster since they were introduced. But, as I wrote in 2010, I don’t think the TSA will back down. It would be too embarrassing if they did.

Video:
http://tsaoutofourpants.wordpress.com/2012/03/06/…
Reports:
http://www.theatlantic.com/technology/archive/2012/…
http://www.nakedcapitalism.com/2012/03/…
http://www.metafilter.com/113613/…
http://www.dailytech.com/article.aspx?newsid=24179

Journal of Transportation Security paper:
http://www.springerlink.com/content/…

German TV segment:
https://www.schneier.com/blog/archives/2010/01/…

TSA’s response:
http://blog.tsa.gov/2012/03/…

TSA pressuring the media:
http://tsaoutofourpants.wordpress.com/2012/03/08/…
My 2010 essay:
http://www.schneier.com/essay-333.html


News

Cryptanalysis of the satellite phone encryption algorithms GMR-1 and GMR-2:
http://aktuell.ruhr-uni-bochum.de/pm2012/…
http://pda.physorg.com/news/…
http://www.telegraph.co.uk/technology/news/9058529/…
http://www.networkworld.com/news/2012/…
Self-domestication happens when the benefits of cooperation outweigh the costs. Here’s how it works in bonobos and other animals.
http://www.wired.com/wiredscience/2012/02/…
This is the sort of thing I write about in my new book. And with both bonobos and humans, there’s an obvious security problem: if almost everyone is non-aggressive, an aggressive minority can easily dominate. How does society prevent that from happening?

Funny comic: what is a suspicious-looking package, anyway?
http://www.gocomics.com/fminus/2012/02/08

Covert communications channel in tarsiers:
http://rsbl.royalsocietypublishing.org/content/…
Research paper: “A birthday present every eleven wallets? The security of customer-chosen banking PINs.” It turns out that “1234” and birthdays are the most common PINs.
http://www.cl.cam.ac.uk/~jcb82/doc/…
http://www.lightbluetouchpaper.org/2012/02/20/…
http://bits.blogs.nytimes.com/2012/02/20/…
John Nash’s fascinating 1955 letter to the NSA:
http://agtb.wordpress.com/2012/02/17/…

According to a report by Juniper, mobile malware is increasing dramatically. I don’t think this is surprising at all. Mobile is the new platform. Mobile is a very intimate platform. It’s where the attackers are going to go.
http://forums.juniper.net/t5/Security-Mobility-Now/…
http://allthingsd.com/20120215/…
We can now conclusively link Stuxnet to the centrifuge structure at the Natanz nuclear enrichment lab in Iran. Watch this new video presentation from Ralph Langner, the researcher who has done the most work on Stuxnet. It’s a long clip, but the good stuff is between 21:00 and 29:00.
http://www.digitalbond.com/2012/01/31/…
The pictures he’s referring to are still up.
http://www.president.ir/en/9172
My previous writings on Stuxnet.
https://www.schneier.com/blog/archives/2010/10/…
https://www.schneier.com/blog/archives/2011/01/…

Extreme computer security when traveling to China.
http://www.nytimes.com/2012/02/11/technology/…
Mention of cryptography in a rap song:
https://www.schneier.com/blog/archives/2012/02/…

A U.S. federal court ruled it is unconstitutional for the police to force someone to decrypt his laptop computer:
https://www.schneier.com/blog/archives/2012/02/…

Good essay on the dangers of cyberwar rhetoric—and the cyberwar arms race.
http://m.wired.com/threatlevel/2012/02/…

An FBI special agent and counterterrorism expert criticizes the TSA:
http://gmancasefile.blogspot.com/2012/01/tsa-fail.html

A clever hack to detect which social networking sites website visitors are logged into.
http://www.tomanthony.co.uk/blog/…

The ACLU filed a FOIA request for a bunch of cables that WikiLeaks had already released complete versions of. What they received is redacted versions of those cables. This gives us a window into what the government decides to withhold.
http://www.aclu.org/…
Commentary: “The Freedom of Information Act provides exceptions for a number of classes of information, but the State Department’s declassification decisions appear to be based not on the criteria specified in the statute, but rather on whether the documents embarrass the US or portray the US in a negative light.”
http://bltnotjustasandwich.com/2012/03/01/…

GPS spoofers: a great movie-plot threat.
https://www.schneier.com/blog/archives/2012/03/…

British anti-theft briefcase from the 1960s.
http://twentytwowords.com/2012/02/15/…
Comic: movie hacking vs. real hacking:
http://www.smbc-comics.com/index.php?…

According to this document, received by EPIC under the Freedom of Information Act, the U.S. Department of Homeland Security is combing through the gazillions of social media postings looking for dissent.
http://epic.org/2012/02/…
A partial list of keywords is included in the document (pages 20-23).
http://epic.org/foia/epic-v-dhs-media-monitoring/…
It’s reprinted in this blog post.
http://animalnewyork.com/2012/02/…
The NSA has released its specification for a secure Android. One of the interesting things it’s requiring is that all data be tunneled through a secure VPN.
http://www.nsa.gov/ia/_files/…
The more I look at mobile security, the more I think a secure tunnel is essential.

This essay uses the interesting metaphor of the man-in-the-middle attacker to describe cloud providers like Facebook and Google. Basically, they get in the middle of our interactions with others and eavesdrop on the data going back and forth.
http://www.itworld.com/it-managementstrategy/247344/…
Jamming speech with recorded speech.
http://www.technologyreview.com/blog/arxiv/27620/

Interesting research on the security of passphrases.
http://www.lightbluetouchpaper.org/2012/03/07/…
http://www.cl.cam.ac.uk/~jcb82/doc/…
Lots of good writings on cyber-war by Thomas Rid.
http://www.foreignpolicy.com/articles/2012/02/27/…
http://www.tandfonline.com/doi/pdf/10.1080/…
http://www.theregister.co.uk/2012/02/24/cyber_weapons/
http://www.tandfonline.com/doi/pdf/10.1080/…


Themes from the RSA Conference

Last month was the big RSA Conference in San Francisco: something like 20,000 people. From what I saw, these were the major themes on the show floor:

1. Companies that deal with “Advanced Persistent Threat.”

2. Companies that help you recover *after* you’ve been hacked.

3. Companies that deal with “Bring Your Own Device” at work, also known as consumerization.

Lots of commentary from other RSA attendees on my blog.
https://www.schneier.com/blog/archives/2012/03/…

Me on APT.
https://www.schneier.com/blog/archives/2011/11/…

Me on consumerization:
http://www.schneier.com/essay-323.html


Schneier News

There have been a lot of news stories, mostly related to the talks I gave at the RSA Conference.

Articles:
http://arstechnica.com/business/news/2012/02/…
http://www.networkworld.com/news/2012/…
http://www.infosecurity-magazine.com/view/24200/…
http://www.crn.com/news/security/232601720/…
http://www.theregister.co.uk/2012/02/29/…
http://www.infosecurity-magazine.com/view/24234/…
http://www.infosecurity-magazine.com/view/24236/…
Podcasts:
http://365.rsaconference.com/community/connect/blog/…
I’ll be speaking at Dominican University in a Chicago suburb on March 27.
http://dushare.dom.edu/CampusNews/SitePages/…


How Changing Technology Affects Security

Security is a tradeoff, a balancing act between attacker and defender. Unfortunately, that balance is never static. Changes in technology affect both sides. Society uses new technologies to decrease what I call the *scope of defection*—what attackers can get away with—and attackers use new technologies to increase it. What’s interesting is the difference between how the two groups incorporate new technologies.

Changes in security systems can be slow. Society has to implement any new security technology as a group, which implies agreement and coordination and—in some instances—a lengthy bureaucratic procurement process. Meanwhile, an attacker can just use the new technology. For example, at the end of the horse-and-buggy era, it was easier for a bank robber to use his new motorcar as a getaway vehicle than it was for a town’s police department to decide it needed a police car, get the budget to buy one, choose which one to buy, buy it, and then develop training and policies for it. And if only one police department did this, the bank robber could just move to another town. Defectors are more agile and adaptable, making them much better at being early adopters of new technology.

We saw it in law enforcement’s initial inability to deal with Internet crime. Criminals were simply more flexible. Traditional criminal organizations like the Mafia didn’t immediately move onto the Internet; instead, new Internet-savvy criminals sprang up. They set up websites like CardersMarket and DarkMarket, and established new organized crime groups within a decade or so of the Internet’s commercialization. Meanwhile, law enforcement simply didn’t have the organizational fluidity to adapt as quickly. Cities couldn’t fire their old-school detectives and replace them with people who understood the Internet. The detectives’ natural inertia and tendency to sweep problems under the rug slowed things even more. They spent the better part of a decade playing catch-up.

There’s one more problem: defenders are in what military strategist Carl von Clausewitz calls “the position of the interior.” They have to defend against every possible attack, while the defector only has to find one flaw that allows one way through the defenses. As systems get more complicated due to technology, more attacks become possible. This means defectors have a first-mover advantage; they get to try the new attack first. Consequently, society is constantly responding: shoe scanners in response to the shoe bomber, harder-to-counterfeit money in response to better counterfeiting technologies, better antivirus software to combat new computer viruses, and so on. The attacker’s clear advantage increases the scope of defection even further.

Of course, there are exceptions. There are technologies that immediately benefit the defender and are of no use at all to the attacker—for example, fingerprint technology allowed police to identify suspects after they left the crime scene and didn’t provide any corresponding benefit to criminals. The same thing happened with immobilizing technology for cars, alarm systems for houses, and computer authentication technologies. Some technologies benefit both but still give more advantage to the defenders. The radio allowed street policemen to communicate remotely, which increased our level of safety more than the corresponding downside of criminals communicating remotely endangers us.

Still, we tend to be reactive in security, and only implement new measures in response to an increased scope of defection. We’re slow about doing it and even slower about getting it right.

This essay originally appeared in IEEE Security & Privacy. It was adapted from Chapter 16 of “Liars and Outliers.”
http://www.schneier.com/essay-392.html


Since 1998, CRYPTO-GRAM has been a free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise. You can subscribe, unsubscribe, or change your address on the Web at <http://www.schneier.com/crypto-gram.html>. Back issues are also available at that URL.

Please feel free to forward CRYPTO-GRAM, in whole or in part, to colleagues and friends who will find it valuable. Permission is also granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.

CRYPTO-GRAM is written by Bruce Schneier. Schneier is the author of the best sellers “Schneier on Security,” “Beyond Fear,” “Secrets and Lies,” and “Applied Cryptography,” and an inventor of the Blowfish, Twofish, Threefish, Helix, Phelix, and Skein algorithms. He is the Chief Security Technology Officer of BT BCSG, and is on the Board of Directors of the Electronic Privacy Information Center (EPIC). He is a frequent writer and lecturer on security topics. See <http://www.schneier.com>.

Crypto-Gram is a personal newsletter. Opinions expressed are not necessarily those of BT.

Copyright (c) 2012 by Bruce Schneier.

Sidebar photo of Bruce Schneier by Joe MacInnis.