
Friday Squid Blogging: Bomb Discovered in Squid at Market

Really:

An unexploded bomb was found inside a squid when the fish was slaughtered at a fish market in Guangdong province.

Oddly enough, this doesn’t seem to be the work of terrorists:

The stall owner, who has been selling fish for 10 years, told the newspaper the 1-meter-long squid might have mistaken the bomb for food.

Clearly there’s much to this story that remains unreported.

More news articles.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Posted on March 29, 2013 at 4:19 PM

The Dangers of Surveillance

Interesting article, “The Dangers of Surveillance,” by Neil M. Richards, Harvard Law Review, 2013. From the abstract:

…We need a better account of the dangers of surveillance.

This article offers such an account. Drawing on law, history, literature, and the work of scholars in the emerging interdisciplinary field of “surveillance studies,” I explain what those harms are and why they matter. At the level of theory, I explain when surveillance is particularly dangerous, and when it is not. Surveillance is harmful because it can chill the exercise of our civil liberties, especially our intellectual privacy. It also gives the watcher power over the watched, creating the risk of a variety of other harms, such as discrimination, coercion, and the threat of selective enforcement, where critics of the government can be prosecuted or blackmailed for wrongdoing unrelated to the purpose of the surveillance.

At a practical level, I propose a set of four principles that should guide the future development of surveillance law, allowing for a more appropriate balance between the costs and benefits of government surveillance. First, we must recognize that surveillance transcends the public-private divide. Even if we are ultimately more concerned with government surveillance, any solution must grapple with the complex relationships between government and corporate watchers. Second, we must recognize that secret surveillance is illegitimate, and prohibit the creation of any domestic surveillance programs whose existence is secret. Third, we should recognize that total surveillance is illegitimate and reject the idea that it is acceptable for the government to record all Internet activity without authorization. Fourth, we must recognize that surveillance is harmful. Surveillance menaces intellectual privacy and increases the risk of blackmail, coercion, and discrimination; accordingly, we must recognize surveillance as a harm in constitutional standing doctrine.

EDITED TO ADD (4/12): Reply to the article.

Posted on March 29, 2013 at 12:25 PM

New RC4 Attack

This is a really clever attack on the RC4 encryption algorithm as used in TLS.

We have found a new attack against TLS that allows an attacker to recover a limited amount of plaintext from a TLS connection when RC4 encryption is used. The attacks arise from statistical flaws in the keystream generated by the RC4 algorithm which become apparent in TLS ciphertexts when the same plaintext is repeatedly encrypted at a fixed location across many TLS sessions.

The attack is very specialized:

The attack is a multi-session attack, which means that we require a target plaintext to be repeatedly sent in the same position in the plaintext stream in multiple TLS sessions. The attack currently only targets the first 256 bytes of the plaintext stream in sessions. Since the first 36 bytes of plaintext are formed from an unpredictable Finished message when SHA-1 is the selected hashing algorithm in the TLS Record Protocol, these first 36 bytes cannot be recovered. This means that the attack can recover 220 bytes of TLS-encrypted plaintext.

The number of sessions needed to reliably recover these plaintext bytes is around 2^30, but already with only 2^24 sessions, certain bytes can be recovered reliably.
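The statistical flaws are easy to see for yourself. The published attack exploits small biases across all of the first 256 keystream bytes; the simplest one to demonstrate is the classic Mantin-Shamir bias, where RC4’s second keystream byte is zero about twice as often as a uniform stream would allow. Here is a short Python sketch of that bias (not the TLS attack itself, just the kind of keystream flaw it builds on):

```python
import os

def rc4_keystream(key: bytes, n: int):
    """Generate n bytes of RC4 keystream for the given key."""
    # Key-scheduling algorithm (KSA)
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # Pseudo-random generation algorithm (PRGA)
    i = j = 0
    out = []
    for _ in range(n):
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(S[(S[i] + S[j]) % 256])
    return out

# Count how often the second keystream byte is zero across many random keys.
# A uniform stream would give about trials/256 hits; RC4 gives roughly twice
# that (the Mantin-Shamir bias).
trials = 20000
zeros = sum(rc4_keystream(os.urandom(16), 2)[1] == 0 for _ in range(trials))
print(zeros, trials / 256)  # zeros lands near 2 * trials/256, i.e. about 156
```

An attacker who sees the same plaintext byte encrypted at the same position under many keys can use biases like this one to vote on the most likely plaintext value, which is essentially what the multi-session TLS attack does.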

Is this a big deal? Yes and no. The attack requires the identical plaintext to be repeatedly encrypted. Normally, this would make for an impractical attack in the real world, but HTTP messages often have stylized headers that are identical across a conversation—for example, cookies. On the other hand, those are the only bits that can be decrypted. Currently, this attack is pretty raw and unoptimized—so it’s likely to become faster and better.

There’s no reason to panic here. But let’s start to move away from RC4 to something like AES.

There are lots of press articles on the attack.

Posted on March 29, 2013 at 6:59 AM

Unwitting Drug Smugglers

This is a story about a physicist who got taken in by an imaginary Internet girlfriend and ended up being arrested in Argentina for drug smuggling. Readers of this blog will see it coming, of course, but it’s still a good read.

I don’t know whether the professor knew what he was doing—it’s pretty clear that the reporter believes he’s guilty. What’s more interesting to me is that there is a drug smuggling industry that relies on recruiting mules off the Internet by pretending to be romantically inclined pretty women. Could that possibly be a useful enough recruiting strategy?

EDITED TO ADD (4/12): Here’s a similar story from New Zealand, with the sexes swapped.

Posted on March 28, 2013 at 8:36 AM

Security Awareness Training

Should companies spend money on security awareness training for their employees? It’s a contentious topic, with respected experts on both sides of the debate. I personally believe that training users in security is generally a waste of time, and that the money can be spent better elsewhere. Moreover, I believe that our industry’s focus on training serves to obscure greater failings in security design.

In order to understand my argument, it’s useful to look at training’s successes and failures. One area where it doesn’t work very well is health. We are forever trying to train people to have healthier lifestyles: eat better, exercise more, whatever. And people are forever ignoring the lessons. One basic reason is psychological: we just aren’t very good at trading off immediate gratification for long-term benefit. A healthier you is an abstract eventuality; sitting in front of the television all afternoon with a McDonald’s Super Monster Meal sounds really good right now. Similarly, computer security is an abstract benefit that gets in the way of enjoying the Internet. Good practices might protect me from a theoretical attack at some time in the future, but they’re a lot of bother right now and I have more fun things to think about. This is the same trick Facebook uses to get people to give away their privacy; no one reads through new privacy policies; it’s much easier to just click “OK” and start chatting with your friends. In short: security is never salient.

Another reason health training works poorly is that it’s hard to link behaviors with benefits. We can train anyone—even laboratory rats—with a simple reward mechanism: push the button, get a food pellet. But with health, the connection is more abstract. If you’re unhealthy, what caused it? It might have been something you did or didn’t do years ago, it might have been one of the dozen things you have been doing and not doing for months, or it might have been the genes you were born with. Computer security is a lot like this, too.

Training laypeople in pharmacology also isn’t very effective. We expect people to make all sorts of medical decisions at the drugstore, and they’re not very good at it. Turns out that it’s hard to teach expertise. We can’t expect every mother to have the knowledge of a doctor or pharmacist or RN, and we certainly can’t expect her to become an expert when most of the advice she’s exposed to comes from manufacturers’ advertising. In computer security, too, a lot of advice comes from companies with products and services to sell.

One area of health that is a training success is HIV prevention. HIV may be very complicated, but the rules for preventing it are pretty simple. And aside from certain sub-Saharan countries, we have taught people a new model of their health, and have dramatically changed their behavior. This is important: most lay medical expertise stems from folk models of health. Similarly, people have folk models of computer security. Maybe they’re right and maybe they’re wrong, but they’re how people organize their thinking. This points to a possible way that computer security training can succeed. We should stop trying to teach expertise, and pick a few simple metaphors of security and train people to make decisions using those metaphors.

On the other hand, we still have trouble teaching people to wash their hands—even though it’s easy, fairly effective, and simple to explain. Notice the difference, though. The risks of catching HIV are huge, and the cause of the security failure is obvious. The risks of not washing your hands are low, and it’s not easy to tie the resultant disease to a particular not-washing decision. Computer security is more like hand washing than HIV.

Another area where training works is driving. We trained, either through formal courses or one-on-one tutoring, and passed a government test to be allowed to drive a car. One reason it works is that driving is a near-term, really cool, obtainable goal. Another reason is that even though the technology of driving has changed dramatically over the past century, that complexity has been largely hidden behind a fairly static interface. You might have learned to drive thirty years ago, but that knowledge is still relevant today. On the other hand, password advice from ten years ago isn’t relevant today. Can I bank from my browser? Are PDFs safe? Are untrusted networks okay? Is JavaScript good or bad? Are my photos more secure in the cloud or on my own hard drive? The “interface” we use to interact with computers and the Internet changes all the time, along with best practices for computer security. This makes training a lot harder.

Food safety is my final example. We have a bunch of simple rules—cooking temperatures for meat, expiration dates on refrigerated goods, the three-second rule for food being dropped on the floor—that are mostly right, but often ignored. If we can’t get people to follow these rules, what hope do we have for computer security training?

To those who think that training users in security is a good idea, I want to ask: “Have you ever met an actual user?” They’re not experts, and we can’t expect them to become experts. The threats change constantly, the likelihood of failure is low, and there is enough complexity that it’s hard for people to understand how to connect their behavior to eventual outcomes. So they turn to folk remedies that, while simple, don’t really address the threats.

Even if we could invent an effective computer security training program, there’s one last problem. HIV prevention training works because affecting what the average person does is valuable. Even if only half the population practices safe sex, those actions dramatically reduce the spread of HIV. But computer security is often only as strong as the weakest link. If four-fifths of company employees learn to choose better passwords, or not to click on dodgy links, one-fifth still get it wrong and the bad guys still get in. As long as we build systems that are vulnerable to the worst case, raising the average case won’t make them more secure.
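The weakest-link arithmetic is easy to work out. If each of n employees independently falls for an attack with probability p, the chance that the attacker gets in somewhere is 1 − (1 − p)^n. A quick sketch with made-up numbers:

```python
def breach_probability(p_fail: float, n_employees: int) -> float:
    """Chance that at least one of n employees slips up, assuming
    independent failures with probability p_fail each."""
    return 1 - (1 - p_fail) ** n_employees

# Training that cuts individual failure from 20% to 5% barely helps a
# 100-person company: the attacker still almost certainly gets in.
for p in (0.20, 0.05, 0.01):
    print(f"p_fail={p:.2f}: breach chance = {breach_probability(p, 100):.4f}")
```

Even at a 1% individual failure rate, a 100-person company is breached more often than not, which is why raising the average employee’s behavior does so little for a weakest-link system.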

The whole concept of security awareness training demonstrates how the computer industry has failed. We should be designing systems that won’t let users choose lousy passwords and don’t care what links a user clicks on. We should be designing systems that conform to their folk beliefs of security, rather than forcing them to learn new ones. Microsoft has a great rule about system messages that require the user to make a decision. They should be NEAT: necessary, explained, actionable, and tested. That’s how we should be designing security interfaces. And we should be spending money on security training for developers. These are people who can be taught expertise in a fast-changing environment, and this is a situation where raising the average behavior increases the security of the overall system.

If we security engineers do our job right, users will get their awareness training informally and organically, from their colleagues and friends. People will learn the correct folk models of security, and be able to make decisions using them. Then maybe an organization can spend an hour a year reminding their employees what good security means at that organization, both on the computer and off. That makes a whole lot more sense.

This essay originally appeared on DarkReading.com.

There is lots of commentary on this one.

EDITED TO ADD (4/4): Another commentary.

EDITED TO ADD (4/8): more commentary.

EDITED TO ADD (4/23): Another opinion.

Posted on March 27, 2013 at 6:47 AM

The NSA's Cryptolog

The NSA has published declassified versions of its Cryptolog newsletter. All the issues from August 1974 through Summer 1997 are on the web, although there are some pretty heavy redactions in places. (Here’s a link to the documents on a non-government site, in case they disappear.)

I haven’t even begun to go through these yet. If you find anything good, please post it in comments.

Posted on March 26, 2013 at 2:15 PM

Identifying People from Mobile Phone Location Data

Turns out that it’s pretty easy:

Researchers at the Massachusetts Institute of Technology (MIT) and the Catholic University of Louvain studied 15 months’ worth of anonymised mobile phone records for 1.5 million individuals.

They found from the “mobility traces” – the evident paths of each mobile phone – that only four locations and times were enough to identify a particular user.

“In the 1930s, it was shown that you need 12 points to uniquely identify and characterise a fingerprint,” said the study’s lead author Yves-Alexandre de Montjoye of MIT.

“What we did here is the exact same thing but with mobility traces. The way we move and the behaviour is so unique that four points are enough to identify 95% of people,” he told BBC News.
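The measure behind that claim is called unicity: pick a few (location, time) points from someone’s trace and count how many traces in the dataset contain all of them; if only one does, those points identify the person. Here is a toy Python sketch of the idea on synthetic traces (the parameters and data are invented for illustration; the study used real, anonymized call-detail records):

```python
import random

random.seed(1)

# Synthetic "mobility traces": each user visits a personal set of
# (cell_tower, hour) points. Purely illustrative, not real data.
N_USERS, N_TOWERS, N_HOURS, TRACE_LEN = 500, 50, 24, 40
traces = [frozenset((random.randrange(N_TOWERS), random.randrange(N_HOURS))
                    for _ in range(TRACE_LEN))
          for _ in range(N_USERS)]

def unicity(k: int) -> float:
    """Fraction of users uniquely pinned down by k random points
    drawn from their own trace."""
    unique = 0
    for trace in traces:
        points = set(random.sample(sorted(trace), min(k, len(trace))))
        matches = sum(points <= other for other in traces)
        if matches == 1:  # only the user's own trace contains all k points
            unique += 1
    return unique / N_USERS

for k in (1, 2, 4):
    print(f"{k} points -> {unicity(k):.0%} of users unique")
```

Even in this crude model, one point identifies almost nobody while four points identify nearly everyone; the surprise in the study is that real human mobility is regular enough for the same effect to hold at the scale of 1.5 million people.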

Here’s the study.

EFF maintains a good page on the issues surrounding location privacy.

Posted on March 26, 2013 at 6:38 AM

Our Internet Surveillance State

I’m going to start with three data points.

One: Some of the Chinese military hackers who were implicated in a broad set of attacks against the U.S. government and corporations were identified because they accessed Facebook from the same network infrastructure they used to carry out their attacks.

Two: Hector Monsegur, one of the leaders of the LulzSec hacker movement, was identified and arrested last year by the FBI. Although he practiced good computer security and used an anonymous relay service to protect his identity, he slipped up.

And three: Paula Broadwell, who had an affair with CIA director David Petraeus, similarly took extensive precautions to hide her identity. She never logged in to her anonymous e-mail service from her home network. Instead, she used hotel and other public networks when she e-mailed him. The FBI correlated hotel registration data from several different hotels—and hers was the common name.

The Internet is a surveillance state. Whether we admit it to ourselves or not, and whether we like it or not, we’re being tracked all the time. Google tracks us, both on its pages and on other pages it has access to. Facebook does the same; it even tracks non-Facebook users. Apple tracks us on our iPhones and iPads. One reporter used a tool called Collusion to track who was tracking him; 105 companies tracked his Internet use during one 36-hour period.

Increasingly, what we do on the Internet is being combined with other data about us. Unmasking Broadwell’s identity involved correlating her Internet activity with her hotel stays. Everything we do now involves computers, and computers produce data as a natural by-product. Everything is now being saved and correlated, and many big-data companies make money by building up intimate profiles of our lives from a variety of sources.

Facebook, for example, correlates your online behavior with your purchasing habits offline. And there’s more. There’s location data from your cell phone, there’s a record of your movements from closed-circuit TVs.

This is ubiquitous surveillance: All of us being watched, all the time, and that data being stored forever. This is what a surveillance state looks like, and it’s efficient beyond the wildest dreams of George Orwell.

Sure, we can take measures to prevent this. We can limit what we search on Google from our iPhones, and instead use computer web browsers that allow us to delete cookies. We can use an alias on Facebook. We can turn our cell phones off and spend cash. But increasingly, none of it matters.

There are simply too many ways to be tracked. The Internet, e-mail, cell phones, web browsers, social networking sites, search engines: these have become necessities, and it’s fanciful to expect people to simply refuse to use them just because they don’t like the spying, especially since the full extent of such spying is deliberately hidden from us and there are few alternatives being marketed by companies that don’t spy.

This isn’t something the free market can fix. We consumers have no choice in the matter. All the major companies that provide us with Internet services are interested in tracking us. Visit a website and it will almost certainly know who you are; there are lots of ways to be tracked without cookies. Cell phone companies routinely undo the web’s privacy protection. One experiment at Carnegie Mellon took real-time videos of students on campus and was able to identify one-third of them by comparing their photos with publicly available tagged Facebook photos.

Maintaining privacy on the Internet is nearly impossible. If you forget even once to enable your protections, or click on the wrong link, or type the wrong thing, you’ve permanently attached your name to whatever anonymous service you’re using. Monsegur slipped up once, and the FBI got him. If the director of the CIA can’t maintain his privacy on the Internet, we’ve got no hope.

In today’s world, governments and corporations are working together to keep things that way. Governments are happy to use the data corporations collect—occasionally demanding that they collect more and save it longer—to spy on us. And corporations are happy to buy data from governments. Together the powerful spy on the powerless, and they’re not going to give up their positions of power, despite what the people want.

Fixing this requires strong government will, but they’re just as punch-drunk on data as the corporations. Slap-on-the-wrist fines notwithstanding, no one is agitating for better privacy laws.

So, we’re done. Welcome to a world where Google knows exactly what sort of porn you all like, and more about your interests than your spouse does. Welcome to a world where your cell phone company knows exactly where you are all the time. Welcome to the end of private conversations, because increasingly your conversations are conducted by e-mail, text, or social networking sites.

And welcome to a world where all of this, and everything else that you do or is done on a computer, is saved, correlated, studied, passed around from company to company without your knowledge or consent; and where the government accesses it at will without a warrant.

Welcome to an Internet without privacy, and we’ve ended up here with hardly a fight.

This essay previously appeared on CNN.com, where it got 23,000 Facebook likes and 2,500 tweets—by far the most widely distributed essay I’ve ever written.

Commentary.

EDITED TO ADD (3/26): More commentary.

EDITED TO ADD (3/28): This Communist commentary seems to be mostly semantic drivel, but parts of it are interesting. The author doesn’t seem to have a problem with State surveillance, but he thinks the incentives that cause businesses to use the same tools should be revisited. This seems just as wrong-headed as the Libertarians who have no problem with corporations using surveillance tools, but don’t want governments to use them.

EDITED TO ADD (5/28): This essay has been translated into Polish.

Posted on March 25, 2013 at 6:28 AM

Changes to the Blog

I have made a few changes to my blog that I’d like to talk about.

The first is the various buttons associated with each post: a Facebook Like button, a Retweet button, and so on. These buttons are ubiquitous on the Internet now. We publishers like them because it makes it easier for our readers to share our content. I especially like them because I can obsessively watch the totals to see how my writings are spreading out across the Internet.

The problem is that these buttons use images, scripts, and/or iframes hosted on the social media site’s own servers. This is partly for webmasters’ convenience; it makes adoption as easy as copy-and-pasting a few lines of code. But it also gives Facebook, Twitter, Google, and so on a way to track you—even if you don’t click on the button. Remember that: if you see sharing buttons on a webpage, that page is almost certainly being tracked by social media sites or a service like AddThis. Or both.

What I’m using instead is SocialSharePrivacy, which was created by the German website Heise Online and adapted by Mathias Panzenböck. The page shows a grayed-out mockup of a sharing button. You click once to activate it, then a second time to share the page. If you don’t click, nothing is loaded from the social media site, so it can’t track your visit. If you don’t care about the privacy issues, you can click on the Settings icon and enable the sharing buttons permanently.

It’s not a perfect solution—two clicks instead of one—but it’s much more privacy-friendly.

(If you’re thinking of doing something similar on your own site, another option to consider is shareNice. ShareNice can be copied to your own webserver; but if you prefer, you can use their hosted version, which makes it as easy to install as AddThis. The difference is that shareNice doesn’t set cookies or even log IP addresses—though you’ll have to trust them on the logging part. The problem is that it can’t display the aggregate totals.)

The second change is the search function. I changed the site’s search engine from Google to DuckDuckGo, which doesn’t even store IP addresses. Again, you have to trust them on that, but I’m inclined to.

The third change is to the feed. Starting now, if you click the feed icon in the right-hand column of my blog, you’ll be subscribing to a feed that’s hosted locally on schneier.com, instead of one produced by Google’s Feedburner service. Again, this reduces the amount of data Google collects about you. Over the next couple of days, I will transition existing subscribers off of Feedburner, but since some of you are subscribed directly to a Feedburner URL, I recommend resubscribing to the new link to be sure. And if by chance you have trouble with the new feed, this legacy link will always point to the Feedburner version.

Fighting against the massive amount of surveillance data collected about us as we surf the Internet is hard, and possibly even fruitless. But I think it’s important to try.

Posted on March 22, 2013 at 3:46 PM
