December 15, 2008
by Bruce Schneier
A free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise.
For back issues, or to subscribe, visit <http://www.schneier.com/crypto-gram.html>.
You can read this issue on the web at <http://www.schneier.com/crypto-gram-0812.html>. These same essays appear in the "Schneier on Security" blog: <http://www.schneier.com/blog>. An RSS feed is available.
Written right after the carnage:
I'm still reading about the Mumbai terrorist attacks, and I expect it'll be a long time before we get a lot of the details. What we know is horrific, and my sympathy goes out to the survivors of the dead (and the injured, who often seem to get ignored as people focus on death tolls). Without discounting the awfulness of the events, I have some initial observations:
* Low-tech is very effective. Movie-plot threats -- terrorists with crop dusters, terrorists with biological agents, terrorists targeting our water supplies -- might be what people worry about, but a bunch of trained (we don't really know yet what sort of training they had, but it's clear that they had some) men with guns and grenades is all they needed.
* At the same time, the attacks had a surprisingly low body count. I can't find exact numbers, but it seems there were about 18 terrorists. The latest toll is 195 dead, 235 wounded. That's 11 dead, 13 wounded, per terrorist. As horrible as the reality is, that's much less than you might have thought if you imagined the movie in your head. Reality is different from the movies.
* Even so, terrorism is rare. If a bunch of men with guns and grenades is all they really need, then why isn't this sort of terrorism more common? Why not in the U.S., where it's easy to get hold of weapons? It's because terrorism is very, very rare.
* Specific countermeasures don't help against these attacks. None of the high-priced countermeasures that defend against specific tactics and specific targets made, or would have made, any difference: photo ID checks, confiscating liquids at airports, fingerprinting foreigners at the border, bag screening on public transportation, anything. Even metal detectors and threat warnings didn't do any good.
If there's any lesson in these attacks, it's not to focus too much on the specifics of the attacks. Of course, that's not the way we're programmed to think. We respond to stories, not analysis. I don't mean to be unsympathetic; this tendency is human and these deaths are really tragic. But 18 armed people intent on killing lots of innocents will be able to do just that, and last-line-of-defense countermeasures won't be able to stop them. What works is intelligence, investigation, and emergency response. We have to find and stop the terrorists before they attack, and deal with the aftermath of the attacks we don't stop. There really is no other way, and I hope that we don't let the tragedy lead us into unwise decisions about how to deal with terrorism.
Our brains and stories:
Twitter was a vital source of information in Mumbai; people were using the site to communicate with and update others during the terrorist attacks. We simply have to be smarter than this idea: "And this morning, Twitter users said that Indian authorities was asking users to stop updating the site for security reasons. One person wrote: 'Police reckon tweeters giving away strategic info to terrorists via Twitter.'"
This fear is exactly backwards. During a terrorist attack -- during any crisis situation, actually -- the one thing people can do is exchange information. It helps people, calms people, and actually reduces the thing the terrorists are trying to achieve: terror. Yes, there are specific movie-plot scenarios where certain public pronouncements might help the terrorists, but those are rare. I would much rather err on the side of more information, more openness, and more communication.
The Mumbai terrorists used Google Earth to help plan their attacks. This is bothering some people:
"Google Earth has previously come in for criticism in India, including from the country's former president, A.P.J. Abdul Kalam.
"Kalam warned in a 2005 lecture that the easy availability online of detailed maps of countries from services such as Google Earth could be misused by terrorists."
Of course the terrorists used Google Earth. They also used boats, and ate at restaurants. Don't even get me started about the fact that they breathed air and drank water.
"A Google spokeswoman said in an e-mail today that Google Earth's imagery is available through commercial and public sources. Google Earth has also been used by aid agencies for relief operations, which outweighs abusive uses, she said."
That's true for all aspects of human infrastructure. Yes, the bad guys use it: bank robbers use cars to get away, drug smugglers use radios to communicate, child pornographers use e-mail. But the good guys use it, too, and the good uses far outweigh the bad uses.
As the first digital president, Barack Obama is learning the hard way how difficult it can be to maintain privacy in the information age. Earlier this year, his passport file was snooped by contract workers in the State Department. In October, someone at Immigration and Customs Enforcement leaked information about his aunt's immigration status. And in November, Verizon employees peeked at his cell phone records.
What these three incidents illustrate is not that computerized databases are vulnerable to hacking -- we already knew that, and anyway the perpetrators all had legitimate access to the systems they used -- but how important audit is as a security measure.
When we think about security, we commonly think about preventive measures: locks to keep burglars out of our homes, bank safes to keep thieves from our money, and airport screeners to keep guns and bombs off airplanes. We might also think of detection and response measures: alarms that go off when burglars pick our locks or dynamite open bank safes, sky marshals on airplanes who respond when a hijacker manages to sneak a gun through airport security. But audit, figuring out who did what after the fact, is often far more important than any of those other three.
Most security against crime comes from audit. Of course we use locks and alarms, but we don't wear bulletproof vests. The police provide for our safety by investigating crimes after the fact and prosecuting the guilty: that's audit.
Audit helps ensure that people don't abuse positions of trust. The cash register, for example, is basically an audit system. Cashiers have to handle the store's money. To ensure they don't skim from the till, the cash register keeps an audit trail of every transaction. The store owner can look at the register totals at the end of the day and make sure the amount of money in the register is the amount that should be there.
The same idea secures us from police abuse, too. The police have enormous power, including the ability to intrude into very intimate aspects of our life in order to solve crimes and keep the peace. This is generally a good thing, but to ensure that the police don't abuse this power, we put in place systems of audit like the warrant process.
The whole NSA warrantless eavesdropping scandal was about this. Some misleadingly painted it as allowing the government to eavesdrop on foreign terrorists, but the government always had that authority. What the government wanted was to not have to submit a warrant, even after the fact, to a secret FISA court. What they wanted was to not be subject to audit.
That would be an incredibly bad idea. Law enforcement systems that don't have good audit features designed in, or are exempt from this sort of audit-based oversight, are much more prone to abuse by those in power -- because they can abuse the system without the risk of getting caught. Audit is essential as the NSA increases its domestic spying. And large police databases, like the FBI Next Generation Identification System, need to have strong audit features built in.
For computerized database systems like that -- systems entrusted with other people's information -- audit is a very important security mechanism. Hospitals need to keep databases of very personal health information, and doctors and nurses need to be able to access that information quickly and easily. A good audit record of who accessed what when is the best way to ensure that those trusted with our medical information don't abuse that trust. It's the same with IRS records, credit reports, police databases, telephone records -- anything personal that someone might want to peek at during the course of his job.
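The "who accessed what when" record at the heart of this argument is simple to picture. Here's a minimal sketch of an append-only access log -- a toy illustration of the concept, not any real system's design (the names and record IDs are invented):

```python
import datetime

class AuditLog:
    """Append-only record of who accessed what, and when."""

    def __init__(self):
        self._entries = []  # only ever appended to, never edited

    def record(self, user, record_id, action):
        """Log one access event with a timestamp."""
        self._entries.append({
            "when": datetime.datetime.utcnow().isoformat(),
            "who": user,
            "what": record_id,
            "action": action,
        })

    def accesses_to(self, record_id):
        """Every event touching one record -- the question investigators
        ask when a specific person's file has been snooped."""
        return [e for e in self._entries if e["what"] == record_id]

log = AuditLog()
log.record("clerk_17", "passport/obama", "read")
log.record("clerk_17", "passport/smith", "read")
log.record("clerk_42", "passport/obama", "read")

# Narrow the culprits down, as Verizon and the State Department did:
suspects = {e["who"] for e in log.accesses_to("passport/obama")}
print(sorted(suspects))  # ['clerk_17', 'clerk_42']
```

The important property is that the log is written as a side effect of every access, so the insider can't opt out of being recorded. A system like ICE's, where access left no usable trail, can't answer the question at all.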
Which brings us back to President Obama. In each of those three examples, someone in a position of trust inappropriately accessed personal information. The difference between how they played out is due to differences in audit. The State Department's audit worked best; they had alarm systems in place that alerted superiors when Obama's passport files were accessed and who accessed them. Verizon's audit mechanisms worked less well; they discovered the inappropriate account access and have narrowed the culprits down to a few people. Audit at Immigration and Customs Enforcement was far less effective; they still don't know who accessed the information.
Large databases filled with personal information, whether managed by governments or corporations, are an essential aspect of the information age. And they each need to be accessed, for legitimate purposes, by thousands or tens of thousands of people. The only way to ensure those people don't abuse the power they're entrusted with is through audit. Without it, we will simply never know who's peeking at what.
FBI's Next Generation Identification System:
This essay first appeared on the Wall Street Journal website.
The volume of spam dropped about 75% after a single hosting provider was unplugged. Spammers used that provider to control most of the zombie spam bots on the Internet.
People say that all cons rely on the mark's greed to work. But this short essay on the neuroscience of cons implies that greed is only a secondary factor.
A security trade-off: "Child-safety activists charge that some of the age-verification firms want to help Internet companies tailor ads for children. They say these firms are substituting one exaggerated threat -- the menace of online sex predators -- with a far more pervasive danger from online marketers like junk food and toy companies that will rush to advertise to children if they are told revealing details about the users." It's an old story: protecting against the rare and spectacular by making yourself more vulnerable to the common and pedestrian.
The Smithsonian had to figure out how to preserve a giant squid while following post-9/11 rules on flammable materials:
A database containing names of members of the far-right British National Party has been leaked:
Victoria's Secret competition gets hacked; clearly, colleges aren't assigning enough homework these days.
1941 pencil-and-paper cipher:
Terrorism Survival Bundle for Windows Mobile. Seems not to be a joke.
In this story about luggage theft at Los Angeles International Airport, we find this interesting paragraph: "They both say there are organized rings of thieves, who identify valuables in your checked luggage by looking at the TSA X-ray screens, then communicate with baggage handlers by text or cell phone, telling them exactly what to look for." Someone should investigate the extent to which the TSA's security measures facilitate crime.
This is the story of a woman who sent Nigerian scammers $400K.
Jeffrey Goldberg on how to protect yourself from hotel terrorism. He points out: "my personal security guru, Bruce Schneier, says it's foolish even to worry about hotel safety, because the chances of something happening on any particular night in any particular hotel are vanishingly small. The taxi ride to the hotel is invariably more dangerous than the hotel itself." And I stand by that. But if you tend to stay in targeted hotels, the advice is pretty good.
Two years ago, all it took to bypass airport security was to fill out the right form:
This paper, "Terrorism-Related Fear and Avoidance Behavior in a Multiethnic Urban Population," is for subscribers only. The abstract is interesting, though:
This is a 2 GB USB drive disguised as a piece of frayed cable. You'll still want to encrypt it in case you lose it, of course, but it is likely to be overlooked if your bags are searched at customs or the police raid your house.
Here's someone who claims that it's "impossible" to hack into radio-controlled thermostats because they're encrypted. Some people just don't understand security.
Jim Harper responds to my comments on fingerprinting foreigners at the border.
Killing robots being tested by Lockheed Martin:
A reporter managed to file legal papers, transferring ownership of the Empire State Building to himself. Yes, it's a stunt, but this sort of thing has been used to commit fraud in the past, and will continue to be a source of fraud in the future. The problem is that there isn't enough integrity checking to ensure that the person who is "selling" the real estate is actually the person who owns it.
Hollow coins -- cheap.
Some of you probably know that I am on the Board of Directors of EPIC, the Electronic Privacy Information Center. This organization does an amazing amount of good for the U.S. and the world with a surprisingly small budget, but they can always use more. If anyone is thinking about which charities to contribute to this month, I urge you to consider EPIC:
When he becomes president, Barack Obama will have to give up his BlackBerry. Aides are concerned that his unofficial conversations would become part of the presidential record, subject to subpoena and eventually made public as part of the country's historical record.
This reality of the information age might be particularly stark for the president, but it's no less true for all of us. Conversation used to be ephemeral. Whether face-to-face or by phone, we could be reasonably sure that what we said disappeared as soon as we said it. Organized crime bosses worried about phone taps and room bugs, but that was the exception. Privacy was just assumed.
This has changed. We chat in e-mail, over SMS and IM, and on social networking websites like Facebook, MySpace, and LiveJournal. We blog and we Twitter. These conversations -- with friends, lovers, colleagues, members of our cabinet -- are not ephemeral; they leave their own electronic trails.
We know this intellectually, but we haven't truly internalized it. We type on, engrossed in conversation, forgetting we're being recorded and those recordings might come back to haunt us later.
Oliver North learned this, way back in 1987, when messages he thought he had deleted were saved by the White House PROFS system, and then subpoenaed in the Iran-Contra affair. Bill Gates learned this in 1998 when his conversational e-mails were provided to opposing counsel as part of the antitrust litigation discovery process. Mark Foley learned this in 2006 when his instant messages were saved and made public by the underage men he talked to. Paris Hilton learned this in 2005 when her cell phone account was hacked, and Sarah Palin learned it earlier this year when her Yahoo e-mail account was hacked. Someone in George W. Bush's administration learned this, and millions of e-mails went mysteriously and conveniently missing.
Ephemeral conversation is dying.
Cardinal Richelieu famously said, "If one would give me six lines written by the hand of the most honest man, I would find something in them to have him hanged." When all our ephemeral conversations can be saved for later examination, different rules have to apply. Conversation is not the same thing as correspondence. Words uttered in haste over morning coffee, whether spoken in a coffee shop or thumbed on a BlackBerry, are not official pronouncements. Discussions in a meeting, whether held in a boardroom or a chat room, are not the same as answers at a press conference. And privacy isn't just about having something to hide; it has enormous value to democracy, liberty, and our basic humanity.
We can't turn back technology; electronic communications are here to stay and even our voice conversations are threatened. But as technology makes our conversations less ephemeral, we need laws to step in and safeguard ephemeral conversation. We need a comprehensive data privacy law, protecting our data and communications regardless of where they are stored or how they are processed. We need laws forcing companies to keep them private and delete them as soon as they are no longer needed. Laws requiring ISPs to store e-mails and other personal communications are exactly what we don't need.
Rules pertaining to government need to be different, because of the power differential. Subjecting the president's communications to eventual public review increases liberty because it reduces the government's power with respect to the people. Subjecting our communications to government review decreases liberty because it reduces our power with respect to the government. The president, as well as other members of government, need some ability to converse ephemerally -- just as they're allowed to have unrecorded meetings and phone calls -- but more of their actions need to be subject to public scrutiny.
But laws can only go so far. Law or no law, when something is made public it's too late. And many of us like having complete records of all our e-mail at our fingertips; it's like our offline brains.
In the end, this is cultural.
The Internet is the greatest generation gap since rock and roll. We're now witnessing one aspect of that generation gap: the younger generation chats digitally, and the older generation treats those chats as written correspondence. Until our CEOs blog, our Congressmen Twitter, and our world leaders send each other LOLcats -- until we have a Presidential election where both candidates have a complete history on social networking sites from before they were teenagers -- we aren't fully an information age society.
When everyone leaves a public digital trail of their personal thoughts since birth, no one will think twice about it being there. Obama might be on the younger side of the generation gap, but the rules he's operating under were written by the older side. It will take another generation before society's tolerance for digital ephemera changes.
Obama and his BlackBerry:
The value of privacy:
Mutual disclosure and power:
This essay previously appeared on the Wall Street Journal website:
This essay is an update of one I wrote previously:
In 1937, Ronald Coase answered one of the most perplexing questions in economics: if markets are so great, why do organizations exist? Why don't people just buy and sell their own services in a market instead? Coase, who won the 1991 Nobel Prize in Economics, answered the question by noting a market's transaction costs: buyers and sellers need to find one another, then reach agreement, and so on. Coase's analysis implies that if these transaction costs are low enough, direct markets of individuals make a whole lot of sense. But if they are too high, it makes more sense to get the job done by an organization that hires people.
Economists have long understood the corollary concept of Coase's ceiling, a point above which organizations collapse under their own weight -- where hiring someone, however competent, means more work for everyone else than the new hire contributes. Software projects often bump their heads against Coase's ceiling: recall Frederick P. Brooks Jr.'s seminal study, "The Mythical Man-Month" (Addison-Wesley, 1975), which showed how adding another person onto a project can slow progress and increase errors.
What's new is something consultant and social technologist Clay Shirky calls "Coase's Floor," below which we find projects and activities that aren't worth their organizational costs -- things so esoteric, so frivolous, so nonsensical, or just so thoroughly unimportant that no organization, large or small, would ever bother with them. Things that you shake your head at when you see them and think, "That's ridiculous."
Sounds a lot like the Internet, doesn't it? And that's precisely Shirky's point. His new book, "Here Comes Everybody: The Power of Organizing Without Organizations," explores a world where organizational costs are close to zero and where ad hoc, loosely connected groups of unpaid amateurs can create an encyclopedia larger than the Britannica and a computer operating system to challenge Microsoft's.
Shirky teaches at New York University's Interactive Telecommunications Program, but this is no academic book. Sacrificing rigor for readability, "Here Comes Everybody" is an entertaining as well as informative romp through some of the Internet's signal moments -- the Howard Dean phenomenon, Belarusian protests organized on LiveJournal, the lost cell phone of a woman named Ivanna, Meetup.com, flash mobs, Twitter, and more -- which Shirky uses to illustrate his points.
The book is filled with bits of insight and common sense, explaining why young people take better advantage of social tools, how the Internet affects social change, and how most Internet discourse falls somewhere between dinnertime conversation and publishing.
Shirky notes that "most user-generated content isn't 'content' at all, in the sense of being created for general consumption, any more than a phone call between you and a sibling is 'family-generated content.' Most of what gets created on any given day is just the ordinary stuff of life -- gossip, little updates, thinking out loud -- but now it's done in the same medium as professionally produced material. Unlike professionally produced material, however, Internet content can be organized after the fact."
No one coordinates Flickr's 6 million to 8 million users. Yet Flickr had the first photos from the 2005 London Transport bombings, beating the traditional news media. Why? People with cell phone cameras uploaded their photos to Flickr. They coordinated themselves using tools that Flickr provides. This is the sort of impromptu organization the Internet is ideally suited for. Shirky explains how these moments are harbingers of a future that can self-organize without formal hierarchies.
These nonorganizations allow for contributions from a wider group of people. A newspaper has to pay someone to take photos; it can't be bothered to hire someone to stand around London underground stations waiting for a major event. Similarly, Microsoft has to pay a programmer full time, and "Encyclopedia Britannica" has to pay someone to write articles. But Flickr can make use of a person with just one photo to contribute, Linux can harness the work of a programmer with little time, and Wikipedia benefits if someone corrects just a single typo. These aggregations of millions of actions that were previously below the Coasean floor have enormous potential.
But a flash mob is still a mob. In a world where the Coasean floor is at ground level, all sorts of organizations appear, including ones you might not like: violent political organizations, hate groups, Holocaust deniers, and so on. (Shirky's discussion of teen anorexia support groups makes for very disturbing reading.) This has considerable implications for security, both online and off.
We never realized how much our security could be attributed to distance and inconvenience -- how difficult it is to recruit, organize, coordinate, and communicate without formal organizations. That inadvertent measure of security is now gone. Bad guys, from hacker groups to terrorist groups, will use the same ad hoc organizational technologies that the rest of us do. And while there has been some success in closing down individual Web pages, discussion groups, and blogs, these are just stopgap measures.
In the end, a virtual community is still a community, and it needs to be treated as such. And just as the best way to keep a neighborhood safe is for a policeman to walk around it, the best way to keep a virtual community safe is to have a virtual police presence.
Crime isn't the only danger; there is also isolation. If people can segregate themselves into increasingly specialized groups, then they're less likely to be exposed to alternative ideas. We see a mild form of this in the current political trend of rival political parties having their own news sources, their own narratives, and their own facts. Increased radicalization is another danger lurking below the Coasean floor.
There's no going back, though. We've all figured out that the Internet makes freedom of speech a much harder right to take away. As Shirky demonstrates, Web 2.0 is having the same effect on freedom of assembly. The consequences of this won't be fully seen for years.
"Here Comes Everybody" covers some of the same ground as Yochai Benkler's "Wealth of Networks". But when I had to explain to one of my corporate attorneys how the Internet has changed the nature of public discourse, Shirky's book is the one I recommended.
Clay Shirky podcast:
This essay previously appeared in "IEEE Spectrum."
Schneier is speaking at a Cato Institute conference, Shaping the New Administration's Counterterrorism Strategy, on 12-13 January in Washington, DC.
Schneier wrote an essay for the Guardian; it also appeared in The Hindu.
Schneier interview from Datamation:
It's another unsubstantiated terrorist plot.
Read the article: "plausible but unsubstantiated," "may have discussed attacking the subway system," "specific details to confirm that this plot has developed beyond aspirational planning," "attack could possibly be conducted," "it's plausible, but there's no evidence yet that it's in the process of being carried out."
I have no specific details, but I want to warn everybody today that fiery rain might fall from the sky. Terrorists may have discussed this sort of tactic, possibly at one of their tequila-fueled aspirational planning sessions. While there is no evidence yet that the plan is in the process of being carried out, I want to be extra-cautious this holiday season. Ho ho ho.
On a couple of websites, people have suggested that I be appointed TSA administrator. For the record, I don't want the job.
I don't want it because it's too narrow. I think the right thing for the government to do is to give the TSA a lot less money. I'd rather they defend against the broad threat of terrorism than focus on the narrow threat of airplane terrorism, and I'd rather they defend against the myriad of threats that face our society than focus on the singular threat of terrorism. But the head of the TSA can't have those opinions; he has to take the money he's given and perform the specific function he's assigned to perform. Not very much fun, really.
But I'd be happy to advise whoever Obama chooses to head the TSA.
The job of the nation's CTO would be more interesting, but I don't think I want it, either. (Have you seen the screening process?)
NIST has published all 51 first-round candidates in its hash algorithm competition. (The other submissions -- we heard they received 64 -- were rejected because they weren't complete.) Their goal is to publish the accepted submissions by the end of the month, in advance of the Third Cryptographic Hash Workshop to be held in Belgium right after FSE in February. They expect to quickly make a first cut of algorithms -- hopefully to about a dozen -- and then give the community about a year of cryptanalysis before making a second cut in 2010.
You can download the submission package for any of the candidates from the NIST page. The SHA-3 Zoo is still the best source for up-to-date cryptanalysis information. Various people have been trying to benchmark the performance of the candidates, but -- of course -- results depend on what metrics you choose.
There are two bugs in the Skein code. They are subtle and esoteric, but they're there. We have revised both the reference and optimized code -- and provided new test vectors -- on the Skein website. A revision of the paper -- Version 1.1 -- has new IVs, new test vectors, and also fixes a few typos.
"Errata: Version 1.1 of the paper, reference, and optimized code corrects an error in which the length of the configuration string was passed in as the size of the internal block (256 bits for Skein-256, 512 for Skein-512, and 1024 for Skein-1024), instead of a constant 256 bits for all three sizes. This error has no cryptographic significance, but affected the test vectors and the initialization values. The revised code also fixes a bug in the MAC mode key processing. This bug does not affect the NIST submission in any way."
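Why does a change to initialization values force new test vectors? Hash functions are validated with known-answer tests: feed in fixed messages, compare against stored digests. Change the IVs and every stored digest is wrong, even though the function is no weaker. Skein isn't in Python's standard library, so this sketch uses SHA-256 (with its published FIPS 180 vectors) as a stand-in to show the general shape of such a test harness:

```python
import hashlib

# Known-answer tests: (input message, expected hex digest).
# These two SHA-256 vectors are the standard published ones.
KNOWN_ANSWERS = [
    (b"abc",
     "ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad"),
    (b"",
     "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"),
]

def run_kat(hash_fn, vectors):
    """Hash each test message and check it against the expected digest.
    Returns a list of (message, passed) pairs."""
    results = []
    for message, expected in vectors:
        actual = hash_fn(message).hexdigest()
        results.append((message, actual == expected))
    return results

results = run_kat(hashlib.sha256, KNOWN_ANSWERS)
assert all(ok for _, ok in results)
print("all test vectors pass")
```

A harness like this run against the Skein 1.0 vectors would fail uniformly under the 1.1 code -- which is exactly how implementers would notice they're building against the old paper.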
There's also news about Skein's performance. And two Java implementations. (Does anyone want to do an implementation of Threefish?) In general, the Skein website is the place to go for up-to-date Skein information.
Lastly, DarkReading says some really nice things about Skein.
"These submissions make some accommodation to the Core 2 processor. They operate in 'little-endian' mode (a quirk of the Intel-like processors that reads some bytes in reverse order). They also allow a large file to be broken into chunks to split the work across multiple processors.
"However, virtually all of the contest submissions share the performance problem mentioned above. The logic they use won't optimally fit within the constraints of a Intel Core 2 processor. Most will perform as bad or worse than the existing SHA-1 algorithm.
"One exception to this is Skein, created by several well-known cryptographers and noted pundit Bruce Schneier. It was designed specifically to exploit all three of the Core 2 execution units and to run at a full 64-bits. This gives it roughly four to 10 times the logic density of competing submissions.
"This is what I meant by the Matrix quote above. They didn't bend the spoon; they bent the crypto algorithm. They moved the logic operations around in a way that wouldn't weaken the crypto, but would strengthen its speed on the Intel Core 2.
"In their paper, the authors of Skein express surprise that a custom silicon ASIC implementation is not any faster than the software implementation. They shouldn't be surprised. Every time you can redefine a problem to run optimally in software, you will reach the same speeds you get with optimized ASIC hardware. The reason software has a reputation of being slow is because people don't redefine the original problem."
That's exactly what we were trying to do.
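The "little-endian" quirk the article mentions is just byte order: Intel-style processors store a word's least significant byte first. A quick Python illustration (purely to show the byte ordering; it isn't tied to any submission's code):

```python
import struct

value = 0x01020304

# Little-endian (Intel-style): least significant byte first.
little = struct.pack("<I", value)
# Big-endian ("network order"): most significant byte first.
big = struct.pack(">I", value)

print(little.hex())  # 04030201
print(big.hex())     # 01020304

# A hash defined over big-endian words pays a byte-swap on every load
# on a little-endian Core 2; a design that works in the processor's
# native order avoids that cost entirely.
assert little == bytes([0x04, 0x03, 0x02, 0x01])
assert big == little[::-1]
```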
My Wired essay on the process:
There are hundreds of comments -- many of them interesting -- on these topics on my blog. Search for the story you want to comment on, and join in.
Since 1998, CRYPTO-GRAM has been a free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise. You can subscribe, unsubscribe, or change your address on the Web at <http://www.schneier.com/crypto-gram.html>. Back issues are also available at that URL.
Please feel free to forward CRYPTO-GRAM, in whole or in part, to colleagues and friends who will find it valuable. Permission is also granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.
CRYPTO-GRAM is written by Bruce Schneier. Schneier is the author of the best sellers "Schneier on Security," "Beyond Fear," "Secrets and Lies," and "Applied Cryptography," and an inventor of the Blowfish and Twofish algorithms. He is the Chief Security Technology Officer of BT (BT acquired Counterpane in 2006), and is on the Board of Directors of the Electronic Privacy Information Center (EPIC). He is a frequent writer and lecturer on security topics. See <http://www.schneier.com>.
Crypto-Gram is a personal newsletter. Opinions expressed are not necessarily those of BT.
Copyright (c) 2008 by Bruce Schneier.