September 15, 2009
by Bruce Schneier
A free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise.
For back issues, or to subscribe, visit <http://www.schneier.com/crypto-gram.html>.
You can read this issue on the web at <http://www.schneier.com/crypto-gram-0909.html>. These same essays appear in the "Schneier on Security" blog: <http://www.schneier.com/blog>. An RSS feed is available.
On September 30, 2001, I published a special issue of Crypto-Gram discussing the terrorist attacks. I wrote about the novelty of the attacks, airplane security, diagnosing intelligence failures, the potential of regulating cryptography -- because it could be used by the terrorists -- and protecting privacy and liberty. Much of what I wrote is still relevant today.
Me from 2006: "Refuse to be Terrorized."
Skein is one of the 14 SHA-3 candidates chosen by NIST to advance to the second round. As part of the process, NIST allowed the algorithm designers to implement small "tweaks" to their algorithms. We've tweaked the rotation constants of Skein.
The revised Skein paper contains the new rotation constants, as well as information about how we chose them and why we changed them, the results of some new cryptanalysis, plus new IVs and test vectors.
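To see where rotation constants fit, here's a minimal sketch of a Threefish-style MIX step, the add-rotate-XOR operation at the core of Skein's compression function. The rotation amount used below is illustrative only; the actual tweaked constants and word schedule are in the revised paper.

```python
MASK64 = 0xFFFFFFFFFFFFFFFF  # Threefish operates on 64-bit words

def rotl64(x, r):
    """Rotate a 64-bit word left by r bits."""
    return ((x << r) | (x >> (64 - r))) & MASK64

def mix(x0, x1, r):
    """One Threefish-style MIX step: add, rotate by constant r, XOR."""
    y0 = (x0 + x1) & MASK64
    y1 = rotl64(x1, r) ^ y0
    return y0, y1
```

The security analysis behind the tweak is about choosing the rotation amounts `r` so that differences diffuse quickly across rounds.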
Tweaks were due today, September 15. Now the SHA-3 process moves into the second round. According to NIST's timeline, they'll choose a set of final round candidate algorithms in 2010, and then a single hash algorithm in 2012. Between now and then, it's up to all of us to evaluate the algorithms and let NIST know what we want. Cryptanalysis is important, of course, but so is performance.
The second-round algorithms are: BLAKE, Blue Midnight Wish, CubeHash, ECHO, Fugue, Grøstl, Hamsi, JH, Keccak, Luffa, Shabal, SHAvite-3, SIMD, and Skein. You can find details on all of them, as well as the current state of their cryptanalysis, at the SHA-3 Zoo.
In other news, we're making Skein shirts available to the public. Those of you who attended the First Hash Function Candidate Conference in Leuven, Belgium, earlier this year might have noticed the stylish black Skein polo shirts worn by the Skein team. Anyone who wants one is welcome to buy one at cost. All orders must be received before 1 October, and then we'll have all the shirts made in one batch.
Revised Skein paper:
Revised Skein source code:
My 2008 essay on SHA-3:
Details on Skein shirts:
Access control is difficult in an organizational setting. On one hand, every employee needs enough access to do his job. On the other hand, every time you give an employee more access, there's more risk: he could abuse that access, or lose information he has access to, or be socially engineered into giving that access to a malfeasant. So a smart, risk-conscious organization will give each employee the exact level of access he needs to do his job, and no more.
Over the years, there's been a lot of work put into role-based access control. But despite the large number of academic papers and high-profile security products, most organizations don't implement it -- at all -- with the predictable security problems as a result.
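As a toy illustration of the idea, a role-based access check reduces to this: permissions attach to roles, and a user's request is allowed only if one of his roles grants it. The roles, users, and permission names below are hypothetical.

```python
# Minimal RBAC sketch: users get permissions only through roles.
ROLE_PERMISSIONS = {
    "clerk":   {"read_record"},
    "auditor": {"read_record", "export_sample"},
    "dba":     {"read_record", "export_sample", "export_all"},
}

USER_ROLES = {
    "alice": {"clerk"},
    "bob":   {"auditor"},
}

def is_allowed(user, permission):
    """Allow an action only if some role the user holds grants it."""
    return any(permission in ROLE_PERMISSIONS.get(role, set())
               for role in USER_ROLES.get(user, set()))
```

The check itself is trivial; as the research below shows, the hard part is keeping the role and permission tables accurate as thousands of people change jobs.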
We regularly read stories of employees abusing their database access-control privileges for personal reasons: medical records, tax records, passport records, police records. NSA eavesdroppers spy on their wives and girlfriends. Departing employees take corporate secrets with them.
A spectacular access control failure occurred in the UK in 2007. An employee of Her Majesty's Revenue & Customs had to send a couple of thousand sample records from a database of all children in the country to the National Audit Office. But it was easier for him to copy the entire database of 25 million people onto a couple of discs and put them in the mail than it was to select just the records needed. Unfortunately, the discs got lost in the mail, and the story was a huge embarrassment for the government.
Eric Johnson at Dartmouth's Tuck School of Business has been studying the problem, and his results won't startle anyone who has thought about it at all. RBAC is very hard to implement correctly. Organizations generally don't even know who has what role. The employee doesn't know, the boss doesn't know -- and these days the employee might have more than one boss -- and senior management certainly doesn't know. There's a reason RBAC came out of the military; in that world, command structures are simple and well-defined.
Even worse, employees' roles change all the time -- Johnson chronicled one business group of 3,000 people that made 1,000 role changes in just three months -- and it's often not obvious what information an employee needs until he actually needs it. And information simply isn't that granular. Just as it's much easier to give someone access to an entire file cabinet than to only the particular files he needs, it's much easier to give someone access to an entire database than only the particular records he needs.
This means that organizations either over-entitle or under-entitle employees. But since getting the job done is more important than anything else, organizations tend to over-entitle. Johnson estimates that 50 percent to 90 percent of employees are over-entitled in large organizations. In the uncommon instance where an employee needs access to something he normally doesn't have, there's generally some process for him to get it. And access is almost never revoked once it's been granted. In large formal organizations, Johnson was able to predict how long an employee had worked there based on how much access he had.
Clearly, organizations can do better. Johnson's current work involves building access-control systems with easy self-escalation, audit to make sure that power isn't abused, violation penalties (Intel, for example, issues "speeding tickets" to violators), and compliance rewards. His goal is to implement incentives and controls that manage access without making people too risk-averse.
In the end, a perfect access control system just isn't possible; organizations are simply too chaotic for it to work. And any good system will allow a certain number of access control violations, if they're made in good faith by people just trying to do their jobs. The "speeding ticket" analogy is better than it looks: we post limits of 55 miles per hour, but generally don't start ticketing people unless they're going over 70.
This essay previously appeared in Information Security, as part of a point/counterpoint with Marcus Ranum. You can read Marcus's response here -- after you answer some nosy questions to get a free account.
Abuse of database access:
Flash has the equivalent of cookies, and they're hard to delete.
It's possible to fabricate DNA evidence:
The legal term "terroristic threats" is older than 9/11, but these days it evokes too much of an emotional response:
Interesting developments in lie detection:
From the humor website Cracked: "The 5 Most Embarrassing Failures in the History of Terrorism." Yes, it's funny. But remember that these are the terrorist masterminds that politicians invoke to keep us scared.
Modeling zombie outbreaks: the math doesn't look good. "When Zombies Attack!: Mathematical Modelling of an Outbreak of Zombie Infection."
As part of their training, federal agents engage in mock exercises in public places. Sometimes, innocent civilians get involved. It's actual security theater.
The sorts of crimes we've been seeing perpetrated against individuals are starting to be perpetrated against small businesses. The problem will get much worse, and the security externalities mean that the banks care much less.
Interesting video demonstrating how a policeman can manipulate the results of a Breathalyzer.
More security stories from the natural world: marine worms with glowing bombs:
Weird: "The U.S. Federal Bureau of Investigation is trying to figure out who is sending laptop computers to state governors across the U.S., including West Virginia Governor Joe Manchin and Wyoming Governor Dave Freudenthal. Some state officials are worried that they may contain malicious software."
Good article on the exaggerated fears of cyberwar:
SIGABA and the history of one-time pads:
Interesting discussion of subpoenas as a security threat:
Nils Gilman's lecture on the global illicit economy. Malware is one of his examples, at about the nine-minute mark.
Perfectly legal (obtained with a FISA warrant) NSA intercepts used to convict liquid bombers.
File deletion is all about control. This didn't use to be an issue: your data was on your computer, and you decided when and how to delete a file. You could use the delete function if you didn't care whether the file could be recovered, and a file-erase program -- I use BCWipe for Windows -- if you wanted to ensure no one could ever recover it.
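As a rough sketch of what such an erase tool does, the function below overwrites a file with random bytes before unlinking it. This is a simplified, best-effort illustration, not a substitute for a real tool: on journaling filesystems, SSDs with wear leveling, and copy-on-write storage, older copies of the data can survive elsewhere.

```python
import os
import secrets

def best_effort_wipe(path):
    """Overwrite a file with random bytes, flush to disk, then delete it.

    Caveat: journaling, wear leveling, and copy-on-write can all leave
    older copies of the data behind, which is why dedicated erasure
    tools (or full-disk encryption) are more reliable in practice."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        f.write(secrets.token_bytes(size))
        f.flush()
        os.fsync(f.fileno())  # push the overwrite to the disk itself
    os.remove(path)
```

The point of the essay, of course, is that on cloud platforms you can't even do this much: the storage isn't yours to overwrite.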
As we move more of our data onto cloud computing platforms such as Gmail and Facebook, and closed proprietary platforms such as the Kindle and the iPhone, deleting data is much harder.
You have to trust that these companies will delete your data when you ask them to, but they're generally not interested in doing so. Sites like these are more likely to make your data inaccessible than they are to physically delete it. Facebook is a known culprit: actually deleting your data from its servers requires a complicated procedure that may or may not work. And even if you do manage to delete your data, copies are certain to remain in the companies' backup systems. Gmail explicitly says this in its privacy notice.
Online backups, SMS messages, photos on photo sharing sites, smartphone applications that store your data in the network: you have no idea what really happens when you delete pieces of data or your entire account, because you're not in control of the computers that are storing the data.
This notion of control also explains how Amazon was able to delete a book that people had previously purchased on their Kindle e-book readers. The legalities are debatable, but Amazon had the technical ability to delete the file because it controls all Kindles. It has designed the Kindle so that it determines when to update the software, whether people are allowed to buy Kindle books, and when to turn off people's Kindles entirely.
Vanish is a research project by Roxana Geambasu and colleagues at the University of Washington. They designed a prototype system that automatically deletes data after a set time interval. So you can send an e-mail, create a Google Doc, post an update to Facebook, or upload a photo to Flickr, all designed to disappear after a set period of time. And after it disappears, no one -- not anyone who downloaded the data, not the site that hosted the data, not anyone who intercepted the data in transit, not even you -- will be able to read it. If the police arrive at Facebook or Google or Flickr with a warrant, they won't be able to read it.
The details are complicated, but Vanish breaks the data's decryption key into a bunch of pieces and scatters them around the web using a peer-to-peer network. Then it uses the natural turnover in these networks -- machines constantly join and leave -- to make the data disappear. Unlike previous programs that supported file deletion, this one doesn't require you to trust any company, organization, or website. It just happens.
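The core trick can be illustrated with simple secret splitting. Vanish itself uses threshold secret sharing -- only a subset of the pieces is needed, so the key survives normal network churn for a while -- but the simpler all-or-nothing XOR split below shows why scattered pieces reveal nothing individually: any single missing share makes the key unrecoverable.

```python
import secrets

def split_key(key, n):
    """Split a key into n shares; all n are needed to recover it."""
    shares = [secrets.token_bytes(len(key)) for _ in range(n - 1)]
    last = key
    for share in shares:  # last = key XOR all the random shares
        last = bytes(a ^ b for a, b in zip(last, share))
    return shares + [last]

def recover_key(shares):
    """XOR every share together to reconstruct the key."""
    key = bytes(len(shares[0]))
    for share in shares:
        key = bytes(a ^ b for a, b in zip(key, share))
    return key
```

In Vanish, the shares are pushed into a peer-to-peer network's distributed hash table; as machines leave the network, shares evaporate and the key -- and with it the data -- becomes unreadable.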
Of course, Vanish doesn't prevent the recipient of an e-mail or the reader of a Facebook page from copying the data and pasting it into another file, just as Kindle's deletion feature doesn't prevent people from copying a book's files and saving them on their computers. Vanish is just a prototype at this point, and it only works if all the people who read your Facebook entries or view your Flickr pictures have it installed on their computers as well; but it's a good demonstration of how control affects file deletion. And while it's a step in the right direction, it's also new and therefore deserves further security analysis before being adopted on a wide scale.
We've lost control of data on some of the computers we own, and we've lost control of our data in the cloud. We're not going to stop using Facebook and Twitter just because they're not going to delete our data when we ask them to, and we're not going to stop using Kindles and iPhones because they may delete our data when we don't want them to. But we need to take back control of data in the cloud, and projects like Vanish show us how we can.
Now we need something that will protect our data when a large corporation decides to delete it.
This essay originally appeared in The Guardian.
Me on cloud computing:
Control and the iPhone:
Social networking companies don't want to delete data:
How to delete your Facebook account:
A recent report has concluded that London's surveillance cameras have solved one crime per thousand cameras per year.
I haven't seen the report, but I know it's hard to figure out when a crime has been "solved" by a surveillance camera. To me, the crime has to have been unsolvable without the cameras. Pro-camera lobbyists repeatedly point to the surveillance-camera images that identified the 7/7 London Transport bombers, but those bombers would have been identified even without the cameras.
And it would really help my understanding of the report's £20,000 cost per crime solved (I assume it's calculated as £200 million for the cameras, divided by the number of crimes solved: one per thousand cameras per year, over ten years) if I knew what sorts of crimes the cameras "solved." If the £200 million solved 10,000 murders, it might very well be a good security trade-off. But my guess is that most of the crimes were of a much lower level.
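For what it's worth, that arithmetic does work out if you assume London has about a million cameras -- a commonly cited estimate, not a figure from the report:

```python
# Back-of-the-envelope check of the £20,000-per-crime figure.
total_cost_pounds = 200_000_000        # reported cost of the cameras
cameras = 1_000_000                    # assumed camera count, not from the report
crimes_per_camera_per_year = 1 / 1000  # the report's detection rate
years = 10

crimes_solved = cameras * crimes_per_camera_per_year * years
cost_per_crime = total_cost_pounds / crimes_solved

print(crimes_solved, cost_per_crime)   # 10000.0 20000.0
```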
Back in 2002, science fiction author Robert J. Sawyer wrote an essay about the trade-off between privacy and security. I've never forgotten the first sentence: "Whenever I visit a tourist attraction that has a guest register, I always sign it. After all, you never know when you'll need an alibi."
Since I read that, whenever I see a tourist attraction with a guest register, I do the same thing. I sign "Robert J. Sawyer, Toronto, ON" -- because you never know when he'll need an alibi.
I'm speaking at the University of Kentucky on September 17.
I'm also speaking at the II International Symposium on Network and Data Communications, in Lima, Peru on September 25.
And I'm speaking at GOVCERT.NL in Rotterdam on October 6.
Here's a video of a talk, "The Future of the Security Industry," I gave at an OWASP meeting in August in Minneapolis.
Someone has been charged with stealing 130 million credit card numbers.
Yes, it's a lot, but that's the sort of quantity credit card numbers come in. They come by the millions, in large database files. Even if you only want ten, you have to steal millions. I'm sure every one of us has a credit card in our wallet whose number has been stolen. It'll probably never be used for fraudulent purposes, but it's in some stolen database somewhere.
Years ago, when giving advice on how to avoid identity theft, I would tell people to shred their trash. Today, that advice is completely obsolete. No one steals credit card numbers one by one out of the trash when they can be stolen by the millions from merchant databases.
If there's actually a cult out there, I want to hear about it. In an essay titled "The Cult of Schneier," John Viega writes about the dangers of relying on Applied Cryptography to design cryptosystems:
But, after many years of evaluating the security of software
I agree. And, to his credit, Viega points out that I agree:
But in the introduction to Bruce Schneier's book, Practical
This is all true.
Designing a cryptosystem is hard. Just as you wouldn't give a person -- even a doctor -- a brain-surgery instruction manual and then expect him to operate on live patients, you shouldn't give an engineer a cryptography book and then expect him to design and implement a cryptosystem. The patient is unlikely to survive, and the cryptosystem is unlikely to be secure.
Even worse, security doesn't provide immediate feedback. A dead patient on the operating table tells the doctor that maybe he doesn't understand brain surgery just because he read a book, but an insecure cryptosystem works just fine. It's not until someone takes the time to break it that the engineer might realize that he didn't do as good a job as he thought. Remember: Anyone can design a security system that he himself cannot break. Even the experts regularly get it wrong. The odds that an amateur will get it right are extremely low.
For those who are interested, a second edition of Practical Cryptography will be published in early 2010, renamed Cryptography Engineering and featuring a third author: Tadayoshi Kohno.
There are thousands of comments -- many of them interesting -- on these topics on my blog. Search for the story you want to comment on, and join in.
Since 1998, CRYPTO-GRAM has been a free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise. You can subscribe, unsubscribe, or change your address on the Web at <http://www.schneier.com/crypto-gram.html>. Back issues are also available at that URL.
Please feel free to forward CRYPTO-GRAM, in whole or in part, to colleagues and friends who will find it valuable. Permission is also granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.
CRYPTO-GRAM is written by Bruce Schneier. Schneier is the author of the best sellers "Schneier on Security," "Beyond Fear," "Secrets and Lies," and "Applied Cryptography," and an inventor of the Blowfish, Twofish, Threefish, Helix, Phelix, and Skein algorithms. He is the Chief Security Technology Officer of BT BCSG, and is on the Board of Directors of the Electronic Privacy Information Center (EPIC). He is a frequent writer and lecturer on security topics. See <http://www.schneier.com>.
Crypto-Gram is a personal newsletter. Opinions expressed are not necessarily those of BT.
Copyright (c) 2009 by Bruce Schneier.