August 15, 2010
by Bruce Schneier
A free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise.
For back issues, or to subscribe, visit <http://www.schneier.com/crypto-gram.html>.
You can read this issue on the web at <http://www.schneier.com/crypto-gram-1008.html>. These same essays and news items appear in the "Schneier on Security" blog at <http://www.schneier.com/blog>, along with a lively comment section. An RSS feed is available.
Lately I've been reading about user security and privacy -- control, really -- on social networking sites. The issues are hard and the solutions harder, but I'm seeing a lot of confusion in even forming the questions. Social networking sites deal with several different types of user data, and it's essential to separate them.
Below is my taxonomy of social networking data, which I first presented at the Internet Governance Forum meeting last November, and again -- revised -- at an OECD workshop on the role of Internet intermediaries in June.
1. Service data is the data you give to a social networking site in order to use it. Such data might include your legal name, your age, and your credit-card number.
2. Disclosed data is what you post on your own pages: blog entries, photographs, messages, comments, and so on.
3. Entrusted data is what you post on other people's pages. It's basically the same stuff as disclosed data, but the difference is that you don't have control over the data once you post it -- another user does.
4. Incidental data is what other people post about you: a paragraph about you that someone else writes, a picture of you that someone else takes and posts. Again, it's basically the same stuff as disclosed data, but the difference is that you don't have control over it, and you didn't create it in the first place.
5. Behavioral data is data the site collects about your habits by recording what you do and who you do it with. It might include games you play, topics you write about, news articles you access (and what that says about your political leanings), and so on.
6. Derived data is data about you that is derived from all the other data. For example, if 80 percent of your friends self-identify as gay, you're likely gay yourself.
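Derived data is the subtle category: it never appears anywhere the user posted it, because the user never posted it at all. As a hypothetical illustration (the attribute name, the threshold, and the data are all made up), a site could infer an undisclosed trait from the disclosed traits of a user's friends:

```python
def infer_attribute(friends, attribute, threshold=0.8):
    """Guess a user's undisclosed attribute from friends who disclosed it.

    Returns True/False for the inferred value, or None if no friend
    disclosed the attribute. Purely illustrative logic.
    """
    disclosed = [f[attribute] for f in friends if attribute in f]
    if not disclosed:
        return None
    fraction = sum(disclosed) / len(disclosed)
    return fraction >= threshold

# Hypothetical friend profiles: three of four disclose the trait as True.
friends = [{"plays_chess": True}, {"plays_chess": True},
           {"plays_chess": True}, {"plays_chess": False}]
print(infer_attribute(friends, "plays_chess"))  # 0.75 < 0.8 -> False
```

The point of the sketch is that the inference runs entirely on other people's disclosed data; the user being profiled contributed nothing and has nothing to delete.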
There are other ways to look at user data. Some of it you give to the social networking site in confidence, expecting the site to safeguard the data. Some of it you publish openly and others use it to find you. And some of it you share only within an enumerated circle of other users. At the receiving end, social networking sites can monetize all of it: generally by selling targeted advertising.
Different social networking sites give users different rights for each data type. Some are always private, some can be made private, and some are always public. Some can be edited or deleted -- I know one site that allows entrusted data to be edited or deleted within a 24-hour period -- and some cannot. Some can be viewed and some cannot.
It's also clear that users should have different rights with respect to each data type. We should be allowed to export, change, and delete disclosed data, even if the social networking sites don't want us to. It's less clear what rights we have for entrusted data -- and far less clear for incidental data. If you post pictures from a party with me in them, can I demand you remove those pictures -- or at least blur out my face? (Go look up the conviction of three Google executives in Italian court over a YouTube video.) And what about behavioral data? It's frequently a critical part of a social networking site's business model. We often don't mind if a site uses it to target advertisements, but are less sanguine when it sells data to third parties.
As we continue our conversations about what sorts of fundamental rights people have with respect to their data, and more countries contemplate regulation on social networking sites and user data, it will be important to keep this taxonomy in mind. The sorts of things that would be suitable for one type of data might be completely unworkable and inappropriate for another.
This essay previously appeared in IEEE Security & Privacy.
The NSA's Perfect Citizen: In what creepy back room do they come up with these names?
Someone claims to have reverse-engineered Skype's proprietary encryption protocols, and has published pieces of it. If the crypto is good, this is less of a big deal than you might think. Good cryptography is designed to be made public; it's only for business reasons that it remains secret.
From the U.S. Government Accountability Office: "Cybersecurity: Key Challenges Need to Be Addressed to Improve Research and Development." Thirty-six pages; I haven't read it.
Two interesting research papers on website password policies.
Interesting journal article evaluating the EU's counterterrorism efforts.
A book on GCHQ, and three reviews.
More research on the effectiveness of terrorist profiling:
Stuxnet is a new Internet worm that specifically targets Siemens WinCC SCADA systems, which are used to control production at industrial plants such as oil rigs, refineries, and electronics factories. The worm seems to upload plant information (schematics and production data) to an external website. Worse, owners of these SCADA systems cannot change the default password, because doing so would break the software.
The Washington Post has published a phenomenal piece of investigative journalism: a long, detailed, and very interesting exposé on the U.S. intelligence industry. Pity people don't care much about investigative journalism -- or facts in politics, really -- anymore.
An article from The Economist makes a point that I have been thinking about for a while: modern technology makes life harder for spies, not easier. It used to be that technology favored spycraft -- think James Bond gadgets -- but more and more, it favors the spycatchers. The ubiquitous collection of personal data makes it harder to maintain a false identity, ubiquitous eavesdropping makes it harder to communicate securely, the prevalence of cameras makes it harder to go unseen, and so on. I think this is an example of the general tendency of modern information and communications technology to increase power in proportion to existing power. So while technology makes the lone spy somewhat more effective, it makes an institutional counterspy organization much more powerful.
Here's a book from 1921 on how to profile people.
In related news, there might be a man-in-the-middle attack possible against the WPA2 protocol. Man-in-the-middle attacks are potentially serious, but it depends on the details -- and they're not available yet.
DNSSEC root key split among seven people:
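The standard construction for splitting a key among n people so that any k of them (reportedly 5 of 7 in ICANN's root-key ceremony) can reconstruct it is a threshold scheme such as Shamir's secret sharing. Here is a toy sketch over a prime field -- a real deployment would use vetted parameters, hardware, and libraries:

```python
import secrets

PRIME = 2**127 - 1  # a Mersenne prime, so arithmetic mod PRIME is a field

def make_shares(secret: int, k: int, n: int):
    """Split `secret` into n shares; any k of them suffice to recover it."""
    # Random polynomial of degree k-1 whose constant term is the secret.
    coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, n + 1)]

def recover(shares):
    """Lagrange-interpolate the shared polynomial at x = 0 to get the secret."""
    secret = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        # Multiply yi by the Lagrange basis value; pow(den, PRIME-2) inverts den.
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret

shares = make_shares(31337, k=5, n=7)
assert recover(shares[:5]) == 31337   # any five of the seven shares work
assert recover(shares[2:7]) == 31337
```

The nice property is information-theoretic: four shares of a degree-4 polynomial reveal nothing at all about the constant term.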
Security vulnerabilities of smart electricity meters.
Hacking ATMs to spit out money, demonstrated at the Black Hat conference:
Seems there are a lot of smartphone apps that eavesdrop on their users. Ostensibly they do it for marketing purposes; really, they seem to do it because the code base they use does it automatically, or just because they can. (Initial reports that an Android wallpaper app was malicious seem to have been an overstatement; the developers were just incompetent, inadvertently collecting more data than necessary.)
Meanwhile, there's now an Android rootkit available.
More brain scanning to detect future terrorists:
Coffee cup disguised as a camera lens; yet another way to smuggle liquids onto aircraft.
There's a new paper circulating that claims to prove that P != NP. The paper has not been refereed, and I haven't seen any independent verifications or refutations. Despite the fact that the paper is by a respected researcher -- Vinay Deolalikar of HP Labs -- and not a crank, my bet is that the proof is flawed.
"Facebook Privacy Settings: Who Cares?" by danah boyd and Eszter Hargittai.
Security analysis of smudges on smart phone touch screens.
Cloning retail gift cards.
WikiLeaks has posted an encrypted 1.4 GB file called "insurance." It's either 1.4 GB of embarrassing secret documents, or 1.4 GB of random data posted as a bluff. There's no way to know.
If WikiLeaks wanted to prove that their "insurance" was the real thing, they should have done this:
* Encrypt each document with a separate AES key.
* Have a trusted third party publicly choose one document at random.
* Publish the decryption key for that document only.
That would be convincing.
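The protocol above can be sketched in a few lines of code. This sketch uses a SHA-256 counter-mode keystream as a stand-in for a real cipher like AES, and the documents and key sizes are illustrative, not anything WikiLeaks actually did:

```python
import hashlib
import os

def keystream(key: bytes, length: int) -> bytes:
    # SHA-256 in counter mode, standing in for a vetted cipher such as AES-CTR.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, data: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

decrypt = encrypt  # XORing with the same keystream reverses it

# Publisher's side: a fresh random key per document; only ciphertexts
# are released as the "insurance" file.
documents = [b"secret memo one", b"secret memo two", b"secret memo three"]
keys = [os.urandom(32) for _ in documents]
insurance = [encrypt(k, d) for k, d in zip(keys, documents)]

# A third party publicly picks an index -- say, document 1 -- and the
# publisher reveals only that document's key. Anyone can now verify:
assert decrypt(keys[1], insurance[1]) == documents[1]
# The other documents stay opaque, because each has its own key.
```

Revealing one per-document key proves the file contains real material without exposing the rest -- exactly the property a hostage-style "insurance" file needs.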
In any case, some of the details might be wrong. The file might not be encrypted with AES-256. It might be Blowfish. It might have been encrypted with OpenSSL. It might be something else.
Weird Iranian paranoia:
Most people might not be aware of it, but there's a National Cryptologic Museum at Ft. Meade, at NSA Headquarters. It's hard to know its exact relationship with the NSA. Is it part of the NSA, or is it a separate organization? Can the NSA reclassify things in its archives? David Kahn has given his papers to the museum; is that a good idea?
A "Memorandum of Understanding (MOU) between The National Security Agency (NSA) and the National Cryptologic Museum Foundation" was recently released. It's pretty boring, really, but it sheds some light on the relationship between the museum and the agency.
None this month. Summers are always slow.
David Ropeik is a writer and consultant who specializes in risk perception and communication. His book, How Risky Is It, Really?: Why Our Fears Don't Always Match the Facts, is a solid introduction to the biology, psychology, and sociology of risk. If you're well-read on the topic already, you won't find much you didn't already know. But if this is a new topic for you, or if you want a well-organized guide to the current research on risk perception all in one place, this is pretty close to the perfect book.
Ropeik builds his model of human risk perception from the inside out. Chapter 1 is about fear, our largely subconscious reaction to risk. Chapter 2 discusses bounded rationality, the cognitive shortcuts that allow us to efficiently make risk trade-offs. Chapter 3 discusses some of the common cognitive biases we have that cause us to either overestimate or underestimate risk: trust, control, choice, natural vs. man-made, fairness, etc. -- 13 in all. Finally, Chapter 4 discusses the sociological aspects of risk perception: how our estimation of risk depends on that of the people around us.
The book is primarily about how we humans get risk wrong: how our perception of risk differs from the reality of risk. But Ropeik is careful not to use the word "wrong," and repeatedly warns us against doing so. Risk perception is not right or wrong, he says; it simply is. I don't agree with this. There is both a feeling and a reality of risk and security, and when they differ, we make bad security trade-offs. If you think your risk of dying in a terrorist attack, or of your children being kidnapped, is higher than it really is, you're going to make bad security trade-offs. Yes, security theater has its place, but we should try to make that place as small as we can.
In Chapter 5, Ropeik tries his hand at solutions to this problem: "closing the perception gap" is how he puts it; reducing the difference between the feeling of security and the reality is how I like to explain it. This is his weakest chapter, but it's also a very hard problem. My writings along this line are similarly weak. Still, his ideas are worth reading and thinking about.
I don't have any other complaints with the book. Ropeik nicely balances readability with scientific rigor, his examples are interesting and illustrative, and he is comprehensive without being boring. Extensive footnotes allow the reader to explore the actual research behind the generalities. Even though I didn't learn much from reading it, I enjoyed the ride.
How Risky Is It, Really? is available in hardcover and for the Kindle. Presumably a paperback will come out in a year or so. Ropeik has a blog, although he doesn't update it much.
My essay on the feeling and reality of security:
My essay on the value of security theater:
Since 1998, CRYPTO-GRAM has been a free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise. You can subscribe, unsubscribe, or change your address on the Web at <http://www.schneier.com/crypto-gram.html>. Back issues are also available at that URL.
Please feel free to forward CRYPTO-GRAM, in whole or in part, to colleagues and friends who will find it valuable. Permission is also granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.
CRYPTO-GRAM is written by Bruce Schneier. Schneier is the author of the best sellers "Schneier on Security," "Beyond Fear," "Secrets and Lies," and "Applied Cryptography," and an inventor of the Blowfish, Twofish, Threefish, Helix, Phelix, and Skein algorithms. He is the Chief Security Technology Officer of BT BCSG, and is on the Board of Directors of the Electronic Privacy Information Center (EPIC). He is a frequent writer and lecturer on security topics. See <http://www.schneier.com>.
Crypto-Gram is a personal newsletter. Opinions expressed are not necessarily those of BT.
Copyright (c) 2010 by Bruce Schneier.