January 15, 2013

by Bruce Schneier
Chief Security Technology Officer, BT

A free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise.

For back issues, or to subscribe, visit <>.

You can read this issue on the web at <>. These same essays and news items appear in the "Schneier on Security" blog at <>, along with a lively comment section. An RSS feed is available.

In this issue:

Last Month's Overreactions

Schools went into lockdown over a thermometer, a car backfiring, a bank robbery a few blocks away, a student alone in a gym, a neighbor on the street, and some vague unfounded rumors. And one high-school kid was arrested for drawing pictures of guns. Everywhere else, post-traumatic stupidity syndrome. (It's not a new phrase -- Google shows hits back to 2001 -- but it's new to me.) I think of it as: "Something must be done. This is something. Therefore, we must do it."

I'm not going to write about the Newtown school massacre. I wrote an article earlier this year after the Aurora shooting, which was a rewrite of one about the 2007 Virginia Tech shootings. I feel as if I'm endlessly repeating myself. Another essay, also from 2007, on the anti-terrorism "War on the Unexpected," is also relevant. Just remember, we're the safest we've been in 40 years.

Post-traumatic stupidity syndrome:

Me on the Aurora shootings:

Me on rare risks and overreaction:

Me on the war on the unexpected:

We're the safest we've been in 40 years:

Public Shaming as a Security Measure

In "Liars and Outliers," I talk a lot about the more social forms of security. One of them is reputational. The link below is to a blog post about that squishy sociological security measure: public shaming as a way to punish bigotry (and, by extension, to reduce the incidence of bigotry).

It's a pretty rambling post, first listing some of the public shaming sites, then trying to figure out whether they're a good idea or not, and finally coming to the conclusion that shaming doesn't do very much good and -- in many cases -- unjustly rewards the shamer.

I disagree with a lot of this. I do agree with:

I do think that shame has a role in the way we control our social norms. Shame is a powerful tool, and it's something that we use to keep our own actions in check all the time. The source of that shame varies immensely. Maybe we are shamed before God, or our parents, or our boss.

But I disagree with the author's insistence that "shame, ultimately, has to come from ourselves. We cannot be forced to feel shame." While technically it's true, operationally it's not. Shame comes from others' reactions to our actions. Yes, we feel it inside -- but it originates from our lifelong inculcation into the norms of our social group. And throughout the history of our species, social groups have used shame to effectively punish those who violate social norms. No one wants a bad reputation.

It's also true that we all have defenses against shame. One of them is to have an alternate social group for whom the shameful behavior is not shameful at all. Another is to simply not care what the group thinks. But none of this makes shame a less valuable tool of societal pressure.

Like all forms of security that society uses to control its members, shame is both useful and valuable. And I'm sure it is effective against bigotry. It might not be obvious how to deploy it effectively in the international and sometimes anonymous world of the Internet, but that's another discussion entirely.

"The Future of Reputation"


More news on the encrypted World War II pigeon message: a Canadian claimed that the message is based on a WWI codebook. A spokesman from GCHQ remains dubious, but says they'll be happy to look at the proposed solution.
The backstory:
Skepticism about the alleged deciphering:

There's a new exploit against Samsung Galaxy phones that allows a rogue app access to all memory. A hacker could copy all of your data, erase all of your data, and basically brick your phone. I haven't found an official Samsung response, but there is a quick fix.

Details on information-age law-enforcement techniques:

The "Great Firewall of China" is now able to detect and block encryption.
Some interesting blog comments from an American living and working in China:

Clever Amazon replacement-order scam:

An interesting firsthand phishing story using Twitter:

A Peruvian spider species creates decoys to fool predators.

Industrial control system comes with a backdoor, protected by secrecy:

The newly announced ElcomSoft Forensic Disk Decryptor can decrypt BitLocker, PGP, and TrueCrypt. And it's only $300. How does it work?

Fascinating article about the economics of becoming a police informant in exchange for a lighter sentence.

Apollo Robbins, the world's best pickpocket.
Videos of him in action.

This is what Facebook gives the police in response to a subpoena. (Note that this isn't in response to a warrant; it's in response to a subpoena.) This might be the first one of these that has ever become public.
Commenters point out that this case is four years old, and that Facebook claims to have revised its policies since then.

Interesting details of an Amazon Marketplace scam. Worth reading.

This "Wall Street Journal" investigative piece is a month old, but well worth reading. Basically, the Total Information Awareness program is back with a different name.
Note that this is government data only, not commercial data. So while it includes "almost any government database, from financial forms submitted by people seeking federally backed mortgages to the health records of people who sought treatment at Veterans Administration hospitals" as well as lots of commercial data, it's data the corporations have already given to the government. It doesn't include, for example, your detailed cell phone bills or your tweets.

Not a cat burglar, a cat smuggler.

It's a new DoS attack against a Facebook account: just claim the person is dead. All you need to do is fake an online obituary.

The politics and philosophy of national security: this essay explains why we're all living in failed Hobbesian states:

Terms of Service as a Security Threat

After the Instagram debacle, where it changed its terms of service to give itself greater rights over user photos and reversed itself after a user backlash, it's worth thinking about the security threat stemming from terms of service in general.

As cloud computing becomes the norm, as Internet security becomes more feudal, these terms of service agreements define what our service providers can do, both with the data we post and with the information they gather about how we use their service. The agreements are very one-sided -- most of the time, we're not even paying customers of these providers -- and can change without warning. And, of course, none of us ever read them.

Here's one example. Prezi is a really cool presentation system. While you can run presentations locally, it's basically cloud-based. Earlier this year, I was at a CISO Summit in Prague, and one of the roundtable discussions centered around services like Prezi. CISOs were worried that sensitive company information was leaking out of the company and being stored insecurely in the cloud. My guess is that they would have been much more worried if they read Prezi's terms of use:

With respect to Public User Content, you hereby do and shall grant to Prezi (and its successors, assigns, and third party service providers) a worldwide, non-exclusive, perpetual, irrevocable, royalty-free, fully paid, sublicensable, and transferable license to use, reproduce, modify, create derivative works from, distribute, publicly display, publicly perform, and otherwise exploit the content on and in connection with the manufacture, sale, promotion, marketing and distribution of products sold on, or in association with, the Service, or for purposes of providing you with the Service and promoting the same, in any medium and by any means currently existing or yet to be devised.
With respect to Private User Content, you hereby do and shall grant to Prezi (and its successors, assigns, and third party service providers) a worldwide, non-exclusive, perpetual, irrevocable, royalty-free, fully paid, sublicensable, and transferable license to use, reproduce, modify, create derivative works from, distribute, publicly display, publicly perform, and otherwise exploit the content solely for purposes of providing you with the Service.

Those paragraphs sure sound like Prezi can do anything it wants, including start a competing business, with any presentation I post to its site. (Note that Prezi's human-readable -- but not legally correct -- terms-of-use document makes no mention of this.) Yes, I know Prezi doesn't *currently intend* to do that, but things change, companies fail, assets get bought, and what matters in the end is what the agreement says.

I don't mean to pick on Prezi; it's just an example. How many other of these Trojan horses are hiding in commonly used cloud provider agreements: both from providers that companies decide to use as a matter of policy, and providers that company employees use in violation of policy, for reasons of convenience?

Prezi terms of use:

Prezi's human-readable -- but not legally correct -- terms of use:

Instagram debacle:

Feudal security:

Classifying a Shape

This is a great essay:

Spheres are *special* shapes for nuclear weapons designers. Most nuclear weapons have, somewhere in them, that spheres-within-spheres arrangement of the implosion nuclear weapon design. You don't have to use spheres -- cylinders can be made to work, and there are lots of rumblings and rumors about non-spherical implosion designs around these here Internets -- but spheres are pretty common.
Imagine the scenario: you're a security officer working at Los Alamos. You know that spheres are weapon parts. You walk into a technical area, and you see spheres all around! Is that an ashtray, or is it a model of a plutonium pit? Anxiety mounts -- does the ashtray go into a safe at the end of the day, or does it stay out on the desk? (Has someone been tapping their cigarettes out into the pit model?)
All of this anxiety can be gone -- gone! -- by simply banning all non-nuclear spheres! That way you can effectively treat *all spheres* as sensitive shapes.
What I love about this little policy proposal is that it illuminates something deep about how secrecy works. Once you decide that something is so dangerous that the entire world hinges on keeping it under control, this sense of fear and dread starts to creep outwards. The worry about what must be controlled becomes insatiable -- and pretty soon the mundane is included with the existential.

The essay continues with a story of a scientist who received a security violation for leaving an orange on his desk.

Two points here. One, this is a classic problem with any detection system. When it's hard to build a system that detects the thing you're looking for, you change the problem to detect something easier -- and hope the overlap is enough to make the system work. Think about airport security. It's too hard to detect actual terrorists with terrorist weapons, so instead they detect pointy objects. Internet filtering systems work the same way, too. (Remember when URL filters blocked the word "sex," and the Middlesex Public Library found that it couldn't get to its municipal webpages?)
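As a toy illustration of that proxy problem (this is not any real filter's code; the blocklist and URLs are hypothetical), a naive substring filter that looks for the easy-to-detect word instead of the actual objectionable content flags the Middlesex library exactly as described:

```python
# A naive URL filter that detects an easy proxy -- a substring -- instead
# of the thing actually being looked for. Hypothetical blocklist and URLs.

BLOCKLIST = ["sex", "weapon"]

def is_blocked(url: str) -> bool:
    """Flag a URL if it contains any blocklisted substring."""
    lowered = url.lower()
    return any(word in lowered for word in BLOCKLIST)

print(is_blocked("http://www.middlesex.lib.ma.us/"))  # True: a false positive
print(is_blocked("http://example.com/catalog"))       # False
```

The filter "works" only to the extent that the proxy overlaps the real target; everything in the overlap's margins becomes a false positive.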

Two, the Los Alamos system only works because false negatives are much, much worse than false positives. It really is worth classifying an abstract shape, and annoying an officeful of scientists and others, to protect the nuclear secrets. Airport security fails because the false-positive/false-negative cost ratio is different.
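That cost-ratio argument can be sketched numerically. All the numbers below are made up purely for illustration; the point is only which error term dominates the detector's expected cost in each setting:

```python
# Back-of-the-envelope expected-cost comparison, with entirely
# hypothetical probabilities and dollar figures.

def error_costs(p_threat, p_miss, p_false_alarm, cost_miss, cost_false_alarm):
    """Return (expected miss cost, expected false-alarm cost) per event."""
    miss = p_threat * p_miss * cost_miss
    false_alarm = (1.0 - p_threat) * p_false_alarm * cost_false_alarm
    return miss, false_alarm

# Weapons-lab setting: losing a secret is catastrophic, so even a rule
# that flags half of all harmless spheres is cheap next to a miss.
lab_miss, lab_fa = error_costs(1e-4, 0.01, 0.5, 1e9, 100)

# Airport setting: attackers are vastly rarer, so the same crude rule's
# cost is dominated by the hassle it imposes on innocent travelers.
air_miss, air_fa = error_costs(1e-8, 0.01, 0.05, 1e9, 50)

print(lab_miss > lab_fa)   # True: misses dominate at the lab
print(air_miss < air_fa)   # True: false alarms dominate at the airport
```

With these (invented) numbers, the lab's expected miss cost swamps its false-alarm cost, so aggressive over-detection is rational; at the airport the ordering flips, and the detector mostly buys annoyance.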

Schneier News

I'm speaking at Etsy in Brooklyn, NY, on January 16.

Finally, "Cryptography Engineering" is available as a DRM-free ebook.

I Seem to Be a Verb Now

From "The Insider's TSA Dictionary":

*Bruce Schneiered*: (V, ints) When a passenger uses logic in order to confound and perplex an officer into submission. Ex: "A TSA officer took my Swiss army knife, but let my scissors go. I then asked him wouldn't it be more dangerous if I were to make my scissors into two blades, or to go into the bathroom on the secure side and sharpen my grandmother's walking stick with one of the scissor blades into a terror spear. Then after I pointed out that all of our bodies contain a lot more than 3.4 ounces of liquids, the TSA guy got all pissed and asked me if I wanted to fly today. I totally *Schneirered* [sic] his ass."

Supposedly the site is by a former TSA employee. I have no idea if that's true.

Experimental Results: Liars and Outliers Trust Offer

Last August, I had a two-day offer on my blog to sell copies of "Liars and Outliers" for $11, in exchange for a book review. This was much less than the $30 list price; less even than the $16 Amazon price. For readers outside the U.S., where books can be very expensive, it was a great price.

I sold 800 books from this offer -- much more than the few hundred I had originally intended -- to people all over the world. It was the end of September before I mailed them all out, and probably a couple of weeks later before everyone received their copy. I sent an e-mail to all the recipients warning them that the books would be somewhat delayed, and in that e-mail, I asked people to send me a link to the review, wherever it was posted. While there was no deadline, at this point, three months after all copies should have been received, it's interesting to count up the number of reviews I received from the offer.

That's not a trivial task. While I asked people to e-mail me URLs, not everyone did. But counting the independent reviews, the Amazon reviews, and the Goodreads reviews from the time period, and making some reasonable assumptions, to date about 70 people have fulfilled their end of the bargain and reviewed my book.

That's 9%.

There were some outliers. One person wrote to tell me that he didn't like the book, and offered not to publish a review despite the agreement. Another two e-mailed me to offer to return the price difference because they hadn't had time to read or review (I declined).

Reading comments on the blog post, a great many people have been busier than they (and I) expected, and just haven't had the time to read the book and write a review. I know my reading is often delayed by more pressing priorities. And although I didn't put any deadline on when the review should be completed, I received a surge of reviews around the end of the year -- probably because some people imposed that deadline on themselves. What is certain is that, for one reason or another, a great majority of people have not yet upheld their end of the bargain.

The original offer was an exercise in trust. But to use the language of the book, the only thing inducing compliance was the morals of the reader. While in theory I could have collected everyone's names, checked off those who wrote reviews, and tried shaming the rest, that would be a lot of work, and make people feel bad without actually changing defectors into cooperators. Perhaps this public nudge will be enough to convince some more people to write reviews.

Since 1998, CRYPTO-GRAM has been a free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise. You can subscribe, unsubscribe, or change your address on the Web at <>. Back issues are also available at that URL.

Please feel free to forward CRYPTO-GRAM, in whole or in part, to colleagues and friends who will find it valuable. Permission is also granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.

CRYPTO-GRAM is written by Bruce Schneier. Schneier is the author of the best sellers "Liars and Outliers," "Beyond Fear," "Secrets and Lies," and "Applied Cryptography," and an inventor of the Blowfish, Twofish, Threefish, Helix, Phelix, and Skein algorithms. He is the Chief Security Technology Officer of BT, and is on the Board of Directors of the Electronic Privacy Information Center (EPIC). He is a frequent writer and lecturer on security topics. See <>.

Crypto-Gram is a personal newsletter. Opinions expressed are not necessarily those of BT.

Copyright (c) 2013 by Bruce Schneier.

Sidebar photo of Bruce Schneier by Joe MacInnis.