Crypto-Gram

April 15, 2013

by Bruce Schneier
Chief Security Technology Officer, BT
schneier@schneier.com
http://www.schneier.com

A free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise.

For back issues, or to subscribe, visit <http://www.schneier.com/crypto-gram.html>.

You can read this issue on the web at <http://www.schneier.com/crypto-gram-1304.html>. These same essays and news items appear in the “Schneier on Security” blog at <http://www.schneier.com/>, along with a lively and intelligent comment section. An RSS feed is available.


In this issue:
      Our Internet Surveillance State
      Sixth Movie-Plot Threat Contest
      IT for Oppression
      News
      When Technology Overtakes Security
      Security Awareness Training
      Schneier News
      What I’ve Been Thinking About
      Changes to My Blog


Our Internet Surveillance State

I’m going to start with three data points.

One: Some of the Chinese military hackers who were implicated in a broad set of attacks against the U.S. government and corporations were identified because they accessed Facebook from the same network infrastructure they used to carry out their attacks.

Two: Hector Monsegur, one of the leaders of the LulzSec hacker movement, was identified and arrested last year by the FBI. Although he practiced good computer security and used an anonymous relay service to protect his identity, he slipped up.

And three: Paula Broadwell, who had an affair with CIA director David Petraeus, similarly took extensive precautions to hide her identity. She never logged in to her anonymous e-mail service from her home network. Instead, she used hotel and other public networks when she e-mailed him. The FBI correlated hotel registration data from several different hotels—and hers was the common name.
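
That kind of correlation is, at bottom, nothing fancier than a set intersection. Here’s a minimal sketch in Python, with invented guest lists; the real investigation, of course, involved subpoenas and far messier data.

    # Guest lists (invented) for the hotels an anonymous e-mail was
    # sent from, each on the date of one of the e-mails. The common
    # name falls out of a simple set intersection.
    hotel_guest_lists = [
        {"A. Jones", "P. Broadwell", "M. Smith"},
        {"K. Lee", "P. Broadwell", "R. Patel"},
        {"T. Nguyen", "P. Broadwell", "A. Jones"},
    ]

    suspects = set.intersection(*hotel_guest_lists)
    print(suspects)  # {'P. Broadwell'}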

The Internet is a surveillance state. Whether we admit it to ourselves or not, and whether we like it or not, we’re being tracked all the time. Google tracks us, both on its pages and on other pages it has access to. Facebook does the same; it even tracks non-Facebook users. Apple tracks us on our iPhones and iPads. One reporter used a tool called Collusion to track who was tracking him; 105 companies tracked his Internet use during one 36-hour period.

Increasingly, what we do on the Internet is being combined with other data about us. Unmasking Broadwell’s identity involved correlating her Internet activity with her hotel stays. Everything we do now involves computers, and computers produce data as a natural by-product. Everything is now being saved and correlated, and many big-data companies make money by building up intimate profiles of our lives from a variety of sources.

Facebook, for example, correlates your online behavior with your purchasing habits offline. And there’s more: there’s location data from your cell phone, and there’s a record of your movements from closed-circuit TVs.

This is ubiquitous surveillance: All of us being watched, all the time, and that data being stored forever. This is what a surveillance state looks like, and it’s efficient beyond the wildest dreams of George Orwell.

Sure, we can take measures to prevent this. We can limit what we search on Google from our iPhones, and instead use computer web browsers that allow us to delete cookies. We can use an alias on Facebook. We can turn our cell phones off and spend cash. But increasingly, none of it matters.

There are simply too many ways to be tracked. The Internet, e-mail, cell phones, web browsers, social networking sites, search engines: these have become necessities, and it’s fanciful to expect people to simply refuse to use them just because they don’t like the spying, especially since the full extent of such spying is deliberately hidden from us and there are few alternatives being marketed by companies that don’t spy.

This isn’t something the free market can fix. We consumers have no choice in the matter. All the major companies that provide us with Internet services are interested in tracking us. Visit a website and it will almost certainly know who you are; there are lots of ways to be tracked without cookies. Cell phone companies routinely undo the web’s privacy protection. One experiment at Carnegie Mellon took real-time videos of students on campus and was able to identify one-third of them by comparing their photos with publicly available tagged Facebook photos.
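
To make “tracked without cookies” concrete, here’s a minimal sketch of browser fingerprinting, the technique behind the EFF’s Panopticlick (linked below). The attribute names and values are hypothetical; real trackers also use fonts, plugins, canvas rendering, and more.

    import hashlib

    # Quasi-identifying attributes the browser volunteers on every
    # visit (values invented for illustration).
    attributes = {
        "user_agent": "Mozilla/5.0 (Windows NT 6.1; rv:19.0) Gecko/20100101",
        "screen": "1366x768x24",
        "timezone": "UTC-5",
        "language": "en-US",
    }

    # Hash the attributes in canonical order: a stable identifier that
    # follows this browser around, no cookie required.
    canonical = "|".join(f"{k}={v}" for k, v in sorted(attributes.items()))
    fingerprint = hashlib.sha256(canonical.encode()).hexdigest()
    print(fingerprint[:16])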

Maintaining privacy on the Internet is nearly impossible. If you forget even once to enable your protections, or click on the wrong link, or type the wrong thing, you’ve permanently attached your name to whatever anonymous service you’re using. Monsegur slipped up once, and the FBI got him. If the director of the CIA can’t maintain his privacy on the Internet, we’ve got no hope.

In today’s world, governments and corporations are working together to keep things that way. Governments are happy to use the data corporations collect—occasionally demanding that they collect more and save it longer—to spy on us. And corporations are happy to buy data from governments. Together the powerful spy on the powerless, and they’re not going to give up their positions of power, despite what the people want.

Fixing this requires strong government will, but governments are just as punch-drunk on data as the corporations. Slap-on-the-wrist fines notwithstanding, no one is agitating for better privacy laws.

So, we’re done. Welcome to a world where Google knows exactly what sort of porn you all like, and more about your interests than your spouse does. Welcome to a world where your cell phone company knows exactly where you are all the time. Welcome to the end of private conversations, because increasingly your conversations are conducted by e-mail, text, or social networking sites.

And welcome to a world where all of this, and everything else that you do or is done on a computer, is saved, correlated, studied, passed around from company to company without your knowledge or consent; and where the government accesses it at will without a warrant.

Welcome to an Internet without privacy, and we’ve ended up here with hardly a fight.

This essay previously appeared on CNN.com, where it got 23,000 Facebook likes and 2,500 tweets—by far the most widely distributed essay I’ve ever written.
http://www.cnn.com/2013/03/16/opinion/…

How the Chinese hackers were identified:
http://security.blogs.cnn.com/2013/02/19/…
http://www.washingtonpost.com/blogs/worldviews/wp/…

How Sabu was identified:
http://www.cnn.com/2012/03/06/us/…
http://arstechnica.com/tech-policy/2012/03/…

How Paula Broadwell was identified:
http://www.cnn.com/2012/11/10/politics/…
http://www.aclu.org/blog/…

Facebook tracking:
http://lifehacker.com/5843969/…
http://www.firstpost.com/tech/…

Collusion results:
http://www.theatlantic.com/technology/archive/2012/…

Data brokers building intimate profiles:
https://www.propublica.org/article/…

Correlating online behavior with offline purchasing habits:
http://adage.com/article/digital/…

Ubiquitous surveillance:
http://www.schneier.com/essay-109.html
http://www.washingtonpost.com/wp-dyn/content/…
http://www.propublica.org/article/…

Data harvesting from social networking sites:
http://www.propublica.org/article/…

Internet tracking:
http://queue.acm.org/detail.cfm?id=2390758
http://news.cnet.com/8301-1009_3-20005185-83.html
http://panopticlick.eff.org

Cell phone surveillance:
https://www.schneier.com/blog/archives/2013/01/…

Carnegie Mellon identification experiment:
http://www.heinz.cmu.edu/~acquisti/…

Google’s StreetView fine:
http://www.nytimes.com/2013/03/13/technology/…

The death of ephemeral conversation:
https://www.schneier.com/blog/archives/2008/11/…

National Security Letters:
http://epic.org/privacy/nsl/

The value of privacy:
http://www.schneier.com/essay-114.html

Commentary:
http://www.infoworld.com/t/cringely/…
http://blog.simplejustice.us/2013/03/17/…
http://telekommunisten.net/2013/03/27/…


Sixth Movie-Plot Threat Contest

It’s back, after a two-year hiatus. Terrorism is boring; cyberwar is in. Cyberwar, and its kin: cyber Pearl Harbor, cyber 9/11, cyber Armageddon. (Or make up your own: a cyber Black Plague, cyber Ragnarok, cyber comet-hits-the-earth.) This is how we get budget and power for militaries. This is how we convince people to give up their freedoms and liberties. This is how we sell-sell-sell computer security products and services. Cyberwar is hot, and it’s super scary. And now, you can help!

For this year’s contest, I want a cyberwar movie-plot threat. (For those who don’t know, a movie-plot threat is a scare story that would make a great movie plot, but is much too specific to build security policy around.) Not the Chinese attacking our power grid or shutting off 911 emergency services—people are already scaring our legislators with that sort of stuff. I want something good, something no one has thought of before.

Entries are limited to 500 words, and should be posted in the comments. In a month, I’ll choose some semifinalists, and we can all vote and pick the winner.

Good luck.

Submit your entry, and read others, on my blog:
https://www.schneier.com/blog/archives/2013/04/…

Terrorist cartoon:
http://wondermark.com/220/

Cyber Pearl Harbor:
http://www.theworld.org/2013/01/cyber-pearl-harbor/
http://tv.msnbc.com/2013/02/22/…
http://www.politico.com/story/2013/02/…

Cyber 9/11:
http://www.politico.com/story/2013/02/…
http://news.cnet.com/8301-1009_3-57556669-83/…

Cyber Armageddon:
http://www.intersecmag.co.uk/article.php?id=87
http://www.itbusinessedge.com/blogs/…

Movie-plot threat:
http://en.wikipedia.org/wiki/Movie_plot_threat

Older contest rules, semifinalists, and winners:
https://www.schneier.com/blog/archives/2006/04/…
https://www.schneier.com/blog/archives/2006/06/…
https://www.schneier.com/blog/archives/2007/04/…
https://www.schneier.com/blog/archives/2007/06/…
https://www.schneier.com/blog/archives/2007/06/…
https://www.schneier.com/blog/archives/2008/04/…
https://www.schneier.com/blog/archives/2008/05/…
https://www.schneier.com/blog/archives/2008/05/…
https://www.schneier.com/blog/archives/2009/04/…
https://www.schneier.com/blog/archives/2009/05/…
https://www.schneier.com/blog/archives/2010/04/…
https://www.schneier.com/blog/archives/2010/05/…
https://www.schneier.com/blog/archives/2010/06/…


IT for Oppression

Whether it’s Syria using Facebook to help identify and arrest dissidents or China using its “Great Firewall” to limit access to international news throughout the country, repressive regimes all over the world are using the Internet to more efficiently implement surveillance, censorship, propaganda, and control. They’re getting really good at it, and the IT industry is helping. We’re helping by creating business applications—categories of applications, really—that are being repurposed by oppressive governments for their own use:

1. What is called censorship when practiced by a government is content filtering when practiced by an organization. Many companies want to keep their employees from viewing porn or updating their Facebook pages while at work. In the other direction, data loss prevention software keeps employees from sending proprietary corporate information outside the network and also serves as a censorship tool. Governments can use these products for their own ends.

2. Propaganda is really just another name for marketing. All sorts of companies offer social media-based marketing services designed to fool consumers into believing there is “buzz” around a product or brand. The only thing different in a government propaganda campaign is the content of the messages.

3. Surveillance is necessary for personalized marketing, the primary profit stream of the Internet. Companies have built massive Internet surveillance systems designed to track users’ behavior all over the Internet and closely monitor their habits. These systems track not only individuals but also relationships between individuals, to deduce their interests so as to advertise to them more effectively. It’s a totalitarian’s dream.

4. Control is how companies protect their business models by limiting what people can do with their computers. These same technologies can easily be co-opted by governments that want to ensure that only certain computer programs are run inside their countries or that their citizens never see particular news programs.

Technology magnifies power, and there’s no technical difference between a government and a corporation wielding it. This is how commercial security equipment from companies like BlueCoat and Sophos ends up being used by the Syrian and other oppressive governments to surveil—in order to arrest—and censor their citizens. This is how the same face-recognition technology that Disney uses in its theme parks ends up identifying protesters in China and Occupy Wall Street protesters in New York.

There are no easy technical solutions, especially because these four applications—censorship, propaganda, surveillance, and control—are intertwined; it can be hard to affect one without also affecting the others. Anonymity helps prevent surveillance, but it also makes propaganda easier. Systems that block propaganda can facilitate censorship. And giving users the ability to run untrusted software on their computers makes it easier for governments—and criminals—to install spyware.

We need more research into how to circumvent these technologies, but it’s a hard sell to both the corporations and governments that rely on them. For example, law enforcement in the US wants drones that can identify and track people, even as we decry China’s use of the same technology. Indeed, the battleground is often economic and political rather than technical; sometimes circumvention research is itself illegal.

The social issues are large. Power is using the Internet to increase its power, and we haven’t yet figured out how to correct the imbalances among government, corporate, and individual interests in our digital world. Cyberspace is still waiting for its Gandhi, its Martin Luther King, and a convincing path from the present to a better future.

This essay previously appeared in IEEE Security & Privacy.
http://www.schneier.com/essay-420.html


News

Audacious daytime prison escape by helicopter.
http://www.cnn.com/2013/03/18/world/americas/…
The escapees have since been recaptured.

Other prison escapes by helicopter.
https://en.wikipedia.org/wiki/…

Some far-out thoughts about computers from the CIA in 1962.
https://www.cia.gov/library/…
https://www.cia.gov/library/…

Nice summary article on the state-sponsored Gauss malware.
http://arstechnica.com/security/2013/03/…

Twenty-five countries are using the FinSpy surveillance software package (also called FinFisher) to spy on their own citizens. It’s sold by the British company Gamma Group.
http://bits.blogs.nytimes.com/2013/03/13/…
http://en.wikipedia.org/wiki/FinFisher
http://citizenlab.org/2013/03/…
http://www.nytimes.com/2012/08/31/technology/…
http://bits.blogs.nytimes.com/2012/08/31/…
https://www.virustotal.com/en/file/…
http://www.virustotal.com/en/file/…
http://www.youtube.com/watch?…

Interesting lessons from the FBI’s insider threat program:
https://www.schneier.com/blog/archives/2013/03/…

The FBI wants cell phone carriers to store SMS messages for a long time, enabling it to conduct surveillance backwards in time. Nothing new there—data retention laws are being debated in many countries around the world—but I didn’t know how varied the cellphone companies’ current retention policies are.
https://www.schneier.com/blog/archives/2013/03/…

The FBI is secretly spying on cloud computer users. Both Google and Microsoft have admitted it. Presumably every other major cloud service provider is getting these National Security Letters as well.
http://www.wired.com/threatlevel/2013/03/…
http://www.wired.com/threatlevel/2013/03/…
If you’ve been following along, you know that a U.S. District Court recently ruled National Security Letters unconstitutional. Not that this changes anything yet.
http://www.slate.com/blogs/future_tense/2013/03/15/…

It’s pretty easy to identify people from their cell phone location data.
http://www.bbc.co.uk/news/science-environment-21923360
http://dx.doi.org/10.1038/srep01376
EFF maintains a good page on the issues surrounding location privacy.
https://www.eff.org/issues/location-privacy

The NSA has published declassified versions of its “Cryptolog” newsletter. All the issues from Aug 1974 through Summer 1997 are on the web, although there are some pretty heavy redactions in places.
http://www.nsa.gov/public_info/declass/cryptologs.shtml
Here’s a link to the documents on a non-government site, in case they disappear.
http://www.governmentattic.org/7docs/…
I haven’t even begun to go through these yet. If you find anything good, please post it in comments.

Two acrostic puzzles from a 1977 issue of “Cryptolog.”
http://www.popsci.com/technology/article/2013-03/…

This is a story about a physicist who got taken in by an imaginary Internet girlfriend and ended up being arrested in Argentina for drug smuggling. Readers of Crypto-Gram will see it coming, of course, but it’s still a good read.
http://www.nytimes.com/2013/03/10/magazine/…
I don’t know whether the professor knew what he was doing—it’s pretty clear that the reporter believes he’s guilty. What’s more interesting to me is that there is a drug smuggling industry that relies on recruiting mules off the Internet by pretending to be romantically inclined pretty women. Could that possibly be a useful enough recruiting strategy?

Here’s a similar story from New Zealand, with the sexes swapped:
http://tvnz.co.nz/national-news/…

This is a really clever attack on the RC4 encryption algorithm as used in TLS.
https://www.schneier.com/blog/archives/2013/03/…

Interesting article, “The Dangers of Surveillance,” by Neil M. Richards, “Harvard Law Review,” 2013.
http://papers.ssrn.com/sol3/papers.cfm?…
Reply to the article:
http://www.harvardlawreview.org/symposium/…

How people talked about the secrecy surrounding the Manhattan Project.
http://nuclearsecrecy.com/blog/2013/03/29/…

Xkcd had a Skein collision competition. The contest is over; Carnegie Mellon University won, with 384 (out of 1024) mismatched bits.
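
For those wondering what a “mismatched bit” is: entries were scored by the Hamming distance between the Skein-1024 hash of an entrant’s input and a target digest xkcd published, so a random guess mismatches about 512 of the 1024 bits. A sketch of that scoring in Python, using SHA-512 as a stand-in since Skein isn’t in the standard library:

    import hashlib

    def mismatched_bits(a: bytes, b: bytes) -> int:
        # Hamming distance: count the 1-bits in the XOR of the digests.
        return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

    # SHA-512 stands in for Skein-1024; the inputs are made up.
    target = hashlib.sha512(b"the published target").digest()
    attempt = hashlib.sha512(b"an entrant's guess").digest()
    print(mismatched_bits(target, attempt), "of", len(target) * 8)
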
http://xkcd.com/1193/
http://almamater.xkcd.com/
http://stackoverflow.com/questions/15769093/…
http://blog.picloud.com/2013/04/02/xkcd-hash-breaking/

Interesting article about the perception of hackers in popular culture, and how the government uses the general fear of them to push for more power.
http://www.theatlantic.com/technology/archive/2012/07/…
Note that this was written last year, before any of the recent overzealous prosecutions.
http://en.wikipedia.org/wiki/Aaron_Swartz
http://www.huffingtonpost.com/2013/03/18/…

I hadn’t heard of the term “elite panic” before, but it’s an interesting one.
https://www.schneier.com/blog/archives/2013/04/…

Interesting article from the “New Yorker.”
http://www.newyorker.com/online/blogs/elements/2013/…
I’m often asked what I think about bitcoins. I haven’t analyzed the security, but what I have seen looks good. The real issues are economic and political, and I don’t have the expertise to have an opinion on that.

By the way, here are more links analyzing bitcoins.
http://market-ticker.org/akcs-www?post=219284
http://www.forbes.com/sites/timothylee/2011/07/14/…
http://ftalphaville.ft.com/2013/04/08/1452532/…
https://www.schneier.com/blog/archives/2012/10/…
http://krugman.blogs.nytimes.com/2011/09/07/…

Apple’s iMessage encryption might be pretty good—the DEA is complaining about it—but it might not be.
https://www.schneier.com/blog/archives/2013/04/…

A nice example of the security mindset from an eighth grader.
http://blog.tanyakhovanova.com/?p=277
I’ve written about the security mindset in the past, and this is a great example of it.
https://www.schneier.com/blog/archives/2008/03/…

The last surviving cryptanalyst from the Battle of Midway, Rear Admiral Donald “Mac” Showers, USN-Ret., passed away on 19 October 2012. His interment at Arlington National Cemetery in Arlington, Virginia, will be Monday, April 15, at 3:00. The family made this a public event to celebrate his life and contributions to the cryptologic community.
http://www.navy.mil/midway/showers.html
http://navintpro.net/?p=2949

Ed Felten has a really good blog post about the externalities that the recent Spamhaus DDoS attack exploited.
http://freedom-to-tinker.com/blog/felten/…
I’ve been writing about security externalities for years. They’re often much harder to solve than technical problems.
https://www.schneier.com/blog/archives/2007/01/…

By the way, a lot of the hype surrounding this attack was media manipulation.
http://gizmodo.com/5992652/…

If the police can use cameras, so can the burglars.
http://www.wfaa.com/news/crime/…

There is a lot of buzz on the Internet about a talk at the Hack-in-the-Box conference by Hugo Teso, who claims he can hack in and remotely control an airplane’s avionics. He even wrote an Android app to do it. Near as I can tell, he’s exploiting a vulnerability in the simulation system, not in actual aircraft.
http://edition.cnn.com/2013/04/11/tech/mobile/…
http://m.blogs.computerworld.com/…
http://www.businessweek.com/articles/2013-04-12/…
http://news.yahoo.com/…
http://rt.com/news/teso-plane-hijack-android-716/
http://www.techspot.com/news/…
http://tech.slashdot.org/story/13/04/10/2033253/…
http://slashdot.org/topic/cloud/…
http://www.informationweek.com/security/…
These are good refutations:
http://www.forbes.com/sites/andygreenberg/2013/04/…
http://www.askthepilot.com/hijacking-via-android/
http://www.pprune.org/tech-log/…

Google Glass enables new forms of cheating.
https://www.schneier.com/blog/archives/2013/04/…


When Technology Overtakes Security

A core, not side, effect of technology is its ability to magnify power and multiply force—for both attackers and defenders. One side creates ceramic handguns, laser-guided missiles, and new-identity theft techniques, while the other side creates anti-missile defense systems, fingerprint databases, and automatic facial recognition systems.

The problem is that it’s not balanced: Attackers generally benefit from new security technologies before defenders do. They have a first-mover advantage. They’re more nimble and adaptable than defensive institutions like police forces. They’re not limited by bureaucracy, laws, or ethics. They can evolve faster. And entropy is on their side—it’s easier to destroy something than it is to prevent, defend against, or recover from that destruction.

For the most part, though, society still wins. The bad guys simply can’t do enough damage to destroy the underlying social system. The question for us is: can society still maintain security as technology becomes more advanced?

I don’t think it can.

Because the damage attackers can cause becomes greater as technology becomes more powerful. Guns become more harmful, explosions become bigger, malware becomes more pernicious…and so on. A single attacker, or small group of attackers, can cause more destruction than ever before.

This is exactly why the whole post-9/11 weapons-of-mass-destruction debate was so overwrought: Terrorists are scary, terrorists flying airplanes into buildings are even scarier, and the thought of a terrorist with a nuclear bomb is absolutely terrifying.

As the destructive power of individual actors and fringe groups increases, so do the calls for—and society’s acceptance of—increased security.

Traditional security largely works “after the fact.” We tend not to ban or restrict the objects that can do harm; instead, we punish the people who do harm with objects. There are exceptions, of course, but they’re exactly that: exceptions. This system works as long as society can tolerate the destructive effects of those objects (for example, allowing people to own baseball bats and arresting them after they use them in a riot is only viable if society can tolerate the potential for riots).

When that isn’t enough, we resort to “before-the-fact” security measures. These come in two basic varieties: general surveillance of people in an effort to stop them before they do damage, and specific interdictions in an effort to stop people from using those technologies to do damage.

But these measures work better at keeping dangerous technologies out of the hands of amateurs than at keeping them out of the hands of professionals.

And in the global interconnected world we live in, they’re not anywhere close to foolproof. Still, a climate of fear causes governments to try. Lots of technologies are already restricted: entire classes of drugs, entire classes of munitions, explosive materials, biological agents. There are age restrictions on vehicles and training restrictions on complex systems like aircraft. We’re already almost entirely living in a surveillance state, though we don’t realize it or won’t admit it to ourselves. This will only get worse as technology advances; today’s Ph.D. theses are tomorrow’s high-school science-fair projects.

Increasingly, broad prohibitions on technologies, constant ubiquitous surveillance, and “Minority Report”-like preemptive security will become the norm. We can debate the effectiveness of various security measures in different circumstances. But the problem isn’t that these security measures won’t work—even as they shred our freedoms and liberties—it’s that no security is perfect.

Because sooner or later, the technology will exist for a hobbyist to explode a nuclear weapon, print a lethal virus from a bio-printer, or turn our electronic infrastructure into a vehicle for large-scale murder. We’ll have the technology eventually to annihilate ourselves in great numbers, and sometime after, that technology will become cheap enough to be easy.

As it gets easier for one member of a group to destroy the entire group, and the group size gets larger, the odds of *someone* in the group doing it approach certainty. Our global interconnectedness means that our group size encompasses everyone on the planet, and since government hasn’t kept up, we have to worry about the weakest-controlled member of the weakest-controlled country. Is this a fundamental limitation of technological advancement, one that could end civilization? First our fears grip us so strongly that, thinking about the short term, we willingly embrace a police state in a desperate attempt to keep us safe; then, someone goes off and destroys us anyway?
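
The arithmetic behind that claim is simple compounding. If each of N actors independently attempts destruction with some tiny probability p, the chance that at least one does is 1-(1-p)^N, which heads toward certainty as N grows. A quick illustration in Python, with invented numbers:

    # Chance that at least one of n actors attempts destruction,
    # assuming a tiny, invented per-actor probability p.
    p = 1e-9  # one in a billion, per actor
    for n in (10**6, 10**8, 10**10):
        print(n, 1 - (1 - p) ** n)
    # prints roughly 0.001, 0.095, and 0.99995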

If security won’t work in the end, what is the solution?

Resilience—building systems able to survive unexpected and devastating attacks—is the best answer we have right now. We need to recognize that large-scale attacks will happen, that society can survive more than we give it credit for, and that we can design systems to survive these sorts of attacks. Calling terrorism an existential threat is ridiculous in a country where more people die each month in car crashes than died in the 9/11 terrorist attacks.

If the U.S. can survive the destruction of an entire city—witness New Orleans after Hurricane Katrina or even New York after Sandy—we need to start acting like it, and planning for it. Still, it’s hard to see how resilience buys us anything but additional time. Technology will continue to advance, and right now we don’t know how to adapt any defenses—including resilience—fast enough.

We need a more flexible and rationally reactive approach to these problems and new regimes of trust for our information-interconnected world. We’re going to have to figure this out if we want to survive, and I’m not sure how many decades we have left.

This essay originally appeared on Wired.com.
http://www.wired.com/opinion/2013/03/…

Security imbalances:
http://www.itbusinessedge.com/itdownloads/…

Cory Doctorow on broad technology prohibitions:
http://boingboing.net/2012/01/10/lockdown.html

Terrorism is not an existential threat:
http://www.foreignaffairs.com/articles/66186/…

New regimes of trust:
http://www.schneier.com/essay-410.html
http://www.schneier.com/essay-412.html

Commentary:
http://eclecticbreakfast.blogspot.com/2013/03/…


Security Awareness Training

Should companies spend money on security awareness training for their employees? It’s a contentious topic, with respected experts on both sides of the debate. I personally believe that training users in security is generally a waste of time, and that the money can be spent better elsewhere. Moreover, I believe that our industry’s focus on training serves to obscure greater failings in security design.

In order to understand my argument, it’s useful to look at training’s successes and failures. One area where it doesn’t work very well is health. We are forever trying to train people to have healthier lifestyles: eat better, exercise more, whatever. And people are forever ignoring the lessons. One basic reason is psychological: we just aren’t very good at trading off immediate gratification for long-term benefit. A healthier you is an abstract eventuality; sitting in front of the television all afternoon with a McDonald’s Super Monster Meal sounds really good *right now*. Similarly, computer security is an abstract benefit that gets in the way of enjoying the Internet. Good practices might protect me from a theoretical attack at some time in the future, but they’re a lot of bother right now and I have more fun things to think about. This is the same trick Facebook uses to get people to give away their privacy: no one reads through new privacy policies; it’s much easier to just click “OK” and start chatting with your friends. In short: security is never salient.

Another reason health training works poorly is that it’s hard to link behaviors with benefits. We can train anyone—even laboratory rats—with a simple reward mechanism: push the button, get a food pellet. But with health, the connection is more abstract. If you’re unhealthy, what caused it? It might have been something you did or didn’t do years ago, it might have been one of the dozen things you have been doing and not doing for months, or it might have been the genes you were born with. Computer security is a lot like this, too.

Training laypeople in pharmacology also isn’t very effective. We expect people to make all sorts of medical decisions at the drugstore, and they’re not very good at it. Turns out that it’s hard to teach expertise. We can’t expect every mother to have the knowledge of a doctor or pharmacist or RN, and we certainly can’t expect her to become an expert when most of the advice she’s exposed to comes from manufacturers’ advertising. In computer security, too, a lot of advice comes from companies with products and services to sell.

One area of health that *is* a training success is HIV prevention. HIV may be very complicated, but the rules for preventing it are pretty simple. And aside from certain sub-Saharan countries, we have taught people a new model of their health, and have dramatically changed their behavior. This is important: most lay medical expertise stems from folk models of health. Similarly, people have folk models of computer security. Maybe they’re right and maybe they’re wrong, but they’re how people organize their thinking. This points to a possible way that computer security training can succeed. We should stop trying to teach expertise, and pick a few simple metaphors of security and train people to make decisions using those metaphors.

On the other hand, we still have trouble teaching people to wash their hands—even though it’s easy, fairly effective, and simple to explain. Notice the difference, though. The risks of catching HIV are huge, and the cause of the security failure is obvious. The risks of not washing your hands are low, and it’s not easy to tie the resultant disease to a particular not-washing decision. Computer security is more like hand washing than HIV.

Another area where training works is driving. We trained, either through formal courses or one-on-one tutoring, and passed a government test, to be allowed to drive a car. One reason that works is because driving is a near-term, really cool, obtainable goal. Another reason is that even though the technology of driving has changed dramatically over the past century, that complexity has been largely hidden behind a fairly static interface. You might have learned to drive thirty years ago, but that knowledge is still relevant today. On the other hand, password advice from ten years ago isn’t relevant today. Can I bank from my browser? Are PDFs safe? Are untrusted networks okay? Is JavaScript good or bad? Are my photos more secure in the cloud or on my own hard drive? The ‘interface’ we use to interact with computers and the Internet changes all the time, along with best practices for computer security. This makes training a lot harder.

Food safety is my final example. We have a bunch of simple rules—cooking temperatures for meat, expiration dates on refrigerated goods, the three-second rule for food being dropped on the floor—that are mostly right, but often ignored. If we can’t get people to follow these rules, what hope do we have for computer security training?

To those who think that training users in security is a good idea, I want to ask: “Have you ever met an actual user?” They’re not experts, and we can’t expect them to become experts. The threats change constantly, the likelihood of failure is low, and there is enough complexity that it’s hard for people to understand how to connect their behavior to eventual outcomes. So they turn to folk remedies that, while simple, don’t really address the threats.

Even if we could invent an effective computer security training program, there’s one last problem. HIV prevention training works because affecting what the average person does is valuable. Even if only half the population practices safe sex, those actions dramatically reduce the spread of HIV. But computer security is often only as strong as the weakest link. If four-fifths of company employees learn to choose better passwords, or not to click on dodgy links, one-fifth still get it wrong and the bad guys still get in. As long as we build systems that are vulnerable to the worst case, raising the average case won’t make them more secure.
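
The weakest-link arithmetic is worth spelling out. Turned around, with a hypothetical target: to keep the chance of even one bad click below 1 percent, each employee has to be nearly perfect, and the required reliability grows with headcount; raising the average from, say, 50 percent to 80 percent never gets you there.

    # How rarely may each of n employees fall for a phish if the
    # company wants less than a 1% chance that *anyone* does?
    # The 1% threshold is hypothetical.
    for n in (10, 100, 1000):
        max_fall_rate = 1 - 0.99 ** (1 / n)
        print(f"{n:>5} employees: each may err {max_fall_rate:.5%} of the time")
    # ~0.10% at 10 employees, ~0.01% at 100, ~0.001% at 1000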

The whole concept of security awareness training demonstrates how the computer industry has failed. We should be designing systems that won’t let users choose lousy passwords and don’t care what links a user clicks on. We should be designing systems that conform to their folk beliefs of security, rather than forcing them to learn new ones. Microsoft has a great rule about system messages that require the user to make a decision. They should be NEAT: necessary, explained, actionable, and tested. That’s how we should be designing security interfaces. And we should be spending money on security training for developers. These are people who can be taught expertise in a fast-changing environment, and this is a situation where raising the average behavior increases the security of the overall system.

If we security engineers do our job right, users will get their awareness training informally and organically, from their colleagues and friends. People will learn the correct folk models of security, and be able to make decisions using them. Then maybe an organization can spend an hour a year reminding their employees what good security means at that organization, both on the computer and off. That makes a whole lot more sense.

This essay originally appeared on DarkReading.com.
http://www.darkreading.com/blog/240151108/…

Folk models of computer security:
http://prisms.cs.umass.edu/cs660sp11/papers/…

Changing password advice:
http://web.cheswick.com/ches/talks/rethink.pdf

Microsoft’s NEAT:
http://blogs.msdn.com/b/sdl/archive/2011/05/04/…

Security training for developers:
http://www.cigital.com/justice-league-blog/2013/01/…

Other essays on the topic, and commentary on this one.
http://www.csoonline.com/article/711412/…
http://searchsecurity.techtarget.com/news/…
http://www.darkreading.com/blog/240151657/…
https://www.trustedsec.com/march-2013/…
http://ben0xa.com/security-awareness-education/
http://mobappsectriathlon.blogspot.com/2013/03/…
http://it.slashdot.org/story/13/03/20/015241/
http://www.darkreading.com/insider-threat/167801100/…
http://www.welivesecurity.com/2013/03/27/…
http://www.computerworld.com/s/article/9238058/…


Schneier News

I’m speaking at Thotcon in Chicago on April 26.
http://www.thotcon.org/

I’m speaking in Dubai on May 6.
http://btglobalevents.com/Events/cybercrime_May2013

Two video interviews of me:
http://searchsecurity.techtarget.com/video/…
https://www.brighttalk.com/webcast/288/72057


What I’ve Been Thinking About

I’m starting to think about my next book, which will be about power and the Internet—from the perspective of security. My objective will be to describe current trends, explain where those trends are leading us, and discuss alternatives for avoiding that outcome. Many of my recent essays have touched on various facets of this, although I’m still looking for synthesis. These facets include:

1. The relationship between the Internet and power: how the Internet affects power, and how power affects the Internet. Increasingly, those in power are using information technology to increase their power.
http://www.schneier.com/essay-409.html

2. A feudal model of security that leaves users with little control over their data or computing platforms, forcing them to trust the companies that sell the hardware, software, and systems—and allowing those companies to abuse that trust.
http://www.schneier.com/essay-406.html

3. The rise of nationalism on the Internet and a cyberwar arms race, both of which play on our fears and which are resulting in increased military involvement in our information infrastructure.
http://www.schneier.com/essay-416.html
http://www.schneier.com/essay-411.html

4. Ubiquitous surveillance for both government and corporate purposes—aided by cloud computing, social networking, and Internet-enabled everything—resulting in a world without any real privacy.
http://www.schneier.com/essay-418.html

5. The four tools of Internet oppression—surveillance, censorship, propaganda, and use control—have both government and corporate uses. And these are interrelated; often, building tools to fight one has the side effect of facilitating another.
http://www.schneier.com/essay-420.html

6. Ill-conceived laws and regulations on behalf of either government or corporate power, whether to prop up business models (copyright protections), fight crime (increased police access to data), or control our actions in cyberspace.

7. The need for leaks: both whistleblowers and FOIA suits. So much of what the government does to us is shrouded in secrecy, and leaks are the only way we know what’s going on. This also applies to the corporate algorithms and systems that control much of our lives.
https://www.schneier.com/blog/archives/2013/03/…

On the one hand, we need new regimes of trust in the information age. (I wrote about this extensively in my most recent book, “Liars and Outliers.”)
http://www.schneier.com/essay-410.html
http://www.schneier.com/essay-412.html
On the other hand, the risks associated with increasing technology might mean that the fear of catastrophic attack will make us unable to create those new regimes.
http://www.schneier.com/essay-417.html

I believe society is headed down a dangerous path, and that we—as members of society—need to make some hard choices about what sort of world we want to live in. If we maintain our current trajectory, the future does not look good. It’s not clear if we have the social or political will to address the intertwined issues of power, security, and technology, or even have the conversations necessary to understand the decisions we need to make. Writing about topics like this is what I do best, and I hope that a book on this topic will have a positive effect on the discourse.

The working title of the book is “Power.com”—although that might be too similar to the book “Power, Inc.” for the final title.

These thoughts are still in draft, and not yet part of a coherent whole. For me, the writing process is how I understand a topic, and the shape of this book will almost certainly change substantially as I write. I’m very interested in what people think about this, especially in terms of solutions. Please pass this around to interested people, and leave comments on the blog post.


Changes to My Blog

I have made a few changes to my Schneier on Security blog that I’d like to talk about.

The first is the various buttons associated with each post: a Facebook Like button, a Retweet button, and so on. These buttons are ubiquitous on the Internet now. We publishers like them because they make it easier for our readers to share our content. I especially like them because I can obsessively watch the totals—that is, see how my writings are spreading out across the Internet.

The problem is that these buttons use images, scripts, and/or iframes hosted on the social media site’s own servers. This is partly for webmasters’ convenience; it makes adoption as easy as copy-and-pasting a few lines of code. But it also gives Facebook, Twitter, Google, and so on a way to track you—even if you don’t click on the button. Remember that: if you see sharing buttons on a webpage, that page is almost certainly being tracked by social media sites or a service like AddThis. Or both.
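
Here’s a minimal sketch of the mechanism, with hypothetical names. The button’s script is fetched from the social network’s own server, and that request automatically carries both the page you’re reading (the Referer header) and the network’s identity cookie; this toy “button host” in Python just logs what a real one could record.

    from http.server import BaseHTTPRequestHandler, HTTPServer

    class ButtonHost(BaseHTTPRequestHandler):
        # Toy stand-in for a social network serving a share-button script.
        def do_GET(self):
            # Both headers arrive on every page view, click or no click.
            page = self.headers.get("Referer", "unknown page")
            user = self.headers.get("Cookie", "unknown user")
            print(f"{user} is reading {page}")
            self.send_response(200)
            self.send_header("Content-Type", "application/javascript")
            self.end_headers()
            self.wfile.write(b"/* button code would go here */")

    HTTPServer(("localhost", 8080), ButtonHost).serve_forever()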

What I’m using instead is SocialSharePrivacy, which was created by the German website Heise Online and adapted by Mathias Panzenböck. The page shows a grayed-out mockup of a sharing button. You click once to activate it, then a second time to share the page. If you don’t click, nothing is loaded from the social media site, so it can’t track your visit. If you don’t care about the privacy issues, you can click on the Settings icon and enable the sharing buttons permanently.

It’s not a perfect solution—two clicks instead of one—but it’s much more privacy-friendly.

(If you’re thinking of doing something similar on your own site, another option to consider is shareNice. ShareNice can be copied to your own webserver; but if you prefer, you can use their hosted version, which makes it as easy to install as AddThis. The difference is that shareNice doesn’t set cookies or even log IP addresses—though you’ll have to trust them on the logging part. The problem is that it can’t display the aggregate totals.)

The second change is the search function. I changed the site’s search engine from Google to DuckDuckGo, which doesn’t even store IP addresses. Again, you have to trust them on that, but I’m inclined to.

The third change is to the feed. Starting now, if you click the feed icon in the right-hand column of my blog, you’ll be subscribing to a feed that’s hosted locally on schneier.com, instead of one produced by Google’s Feedburner service. Again, this reduces the amount of data Google collects about you. Over the next couple of days, I will transition existing subscribers off of Feedburner, but since some of you are subscribed directly to a Feedburner URL, I recommend resubscribing to the new link to be sure. And if by chance you have trouble with the new feed, a legacy link will always point to the Feedburner version.

Fighting against the massive amount of surveillance data collected about us as we surf the Internet is hard, and possibly even fruitless. But I think it’s important to try.

My blog:
http://www.schneier.com/

AddThis:
http://www.addthis.com/

SocialSharePrivacy:
https://github.com/panzi/SocialSharePrivacy

Heise Online:
http://heise.de/

shareNice:
https://sharenice.org/

DuckDuckGo:
https://duckduckgo.com/privacy

RSS new link:
http://www.schneier.com/feed/atom

RSS legacy link:
http://feeds.feedburner.com/schneier/fulltext


Since 1998, CRYPTO-GRAM has been a free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise. You can subscribe, unsubscribe, or change your address on the Web at <http://www.schneier.com/crypto-gram.html>. Back issues are also available at that URL.

Please feel free to forward CRYPTO-GRAM, in whole or in part, to colleagues and friends who will find it valuable. Permission is also granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.

CRYPTO-GRAM is written by Bruce Schneier. Schneier is the author of the best sellers “Liars and Outliers,” “Beyond Fear,” “Secrets and Lies,” and “Applied Cryptography,” and an inventor of the Blowfish, Twofish, Threefish, Helix, Phelix, and Skein algorithms. He is the Chief Security Technology Officer of BT, and is on the Advisory Boards of the Electronic Privacy Information Center (EPIC) and the Electronic Frontier Foundation (EFF). He is a frequent writer and lecturer on security topics. See <http://www.schneier.com>.

Crypto-Gram is a personal newsletter. Opinions expressed are not necessarily those of BT.

Copyright (c) 2013 by Bruce Schneier.

Sidebar photo of Bruce Schneier by Joe MacInnis.