February 15, 2007
A free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise.
For back issues, or to subscribe, visit <http://www.schneier.com/crypto-gram.html>.
You can read this issue on the web at <http://www.schneier.com/crypto-gram-0702.html>. These same essays appear in the "Schneier on Security" blog: <http://www.schneier.com/blog>. An RSS feed is available.
While visiting some friends and their new baby in the hospital last week, I noticed an interesting bit of security. To prevent infant abduction, all babies had RFID tags attached to their ankles by a bracelet. There are sensors on the doors to the maternity ward, and if a baby passes through, an alarm goes off.
Infant abduction is rare, but still a risk. In the last 22 years, some 233 such abductions have occurred in the United States. About 4 million babies are born each year, which means that a baby has a 1-in-375,000 chance of being abducted. Compare this with the infant mortality rate in the U.S. -- one in 145 -- and it becomes clear where the real risks are.
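The odds quoted above follow from a quick back-of-the-envelope calculation using only the figures in the paragraph:

```python
# Figures from the paragraph above.
births_per_year = 4_000_000
abductions_total = 233      # abductions over the last 22 years
years = 22

abductions_per_year = abductions_total / years            # roughly 10.6 per year
odds_denominator = births_per_year / abductions_per_year  # roughly 1 in 377,000

# Compare with the 1-in-145 infant mortality rate.
mortality_odds = 145
risk_ratio = odds_denominator / mortality_odds  # mortality is ~2,600x more likely

print(f"abduction odds: about 1 in {odds_denominator:,.0f}")
print(f"infant mortality is roughly {risk_ratio:,.0f}x more likely")
```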
And the 1-in-375,000 chance is not today's risk. Infant abduction rates have plummeted in recent years, mostly due to education programs at hospitals.
So why are hospitals bothering with RFID bracelets? I think they're primarily there to reassure the mothers. Many times during my friends' stay at the hospital, the doctors had to take the baby away for this or that test. Millions of years of evolution have forged a strong bond between new parents and new baby; the RFID bracelets are a low-cost way to ensure that the parents are more relaxed when their baby is out of their sight.
Security is both a reality and a feeling. The reality of security is mathematical, based on the probability of different risks and the effectiveness of different countermeasures. We know the infant abduction rates and how well the bracelets reduce those rates. We also know the cost of the bracelets, and can thus calculate whether they're a cost-effective security measure or not. But security is also a feeling, based on individual psychological reactions to both the risks and the countermeasures. And the two things are different: You can be secure even though you don't feel secure, and you can feel secure even though you're not really secure.
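That cost-effectiveness calculation can be sketched in a few lines. Everything here except the 1-in-375,000 baseline is a hypothetical assumption -- the essay gives no bracelet cost or effectiveness figure:

```python
# Hypothetical inputs -- illustrative assumptions, not real figures.
bracelet_cost_per_birth = 5.00   # assumed cost of tag, sensors, and staff time per baby
p_abduction = 1 / 375_000        # baseline risk, from the essay
risk_reduction = 0.5             # assume the bracelets halve the risk

abductions_prevented_per_birth = p_abduction * risk_reduction
cost_per_prevented_abduction = bracelet_cost_per_birth / abductions_prevented_per_birth

print(f"${cost_per_prevented_abduction:,.0f} per abduction prevented")
# Under these assumptions: $3,750,000 per abduction prevented.
```

Whether $3.75 million per prevented abduction is a good trade-off is exactly the kind of question the "reality of security" side of the calculation is meant to answer; the assumptions are the whole argument.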
The RFID bracelets are what I've come to call security theater: security primarily designed to make you *feel* more secure. I've regularly maligned security theater as a waste, but it's not always, and not entirely, so.
It's only a waste if you consider the reality of security exclusively. There are times when people feel less secure than they actually are. In those cases -- like with mothers and the threat of baby abduction -- a palliative countermeasure that primarily increases the feeling of security is just what the doctor ordered.
Tamper-resistant packaging for over-the-counter drugs started to appear in the 1980s, in response to some highly publicized poisonings. As a countermeasure, it's largely security theater. It's easy to poison many foods and over-the-counter medicines right through the seal -- with a syringe, for example -- or to open and replace the seal well enough that an unwary consumer won't detect it. But in the 1980s, there was a widespread fear of random poisonings in over-the-counter medicines, and tamper-resistant packaging brought people's perceptions of the risk more in line with the actual risk: minimal.
Much of the post-9/11 security can be explained by this as well. I've often talked about the National Guard troops in airports right after the terrorist attacks, and the fact that they had no bullets in their guns. As a security countermeasure, it made little sense for them to be there. They didn't have the training necessary to improve security at the checkpoints, or even to be another useful pair of eyes. But to reassure a jittery public that it's OK to fly, it was probably the right thing to do.
Security theater also addresses the ancillary risk of lawsuits. Lawsuits are ultimately decided by juries, or settled because of the threat of jury trial, and juries are going to decide cases based on their feelings as well as the facts. It's not enough for a hospital to point to infant abduction rates and rightly claim that RFID bracelets aren't worth it; the other side is going to put a weeping mother on the stand and make an emotional argument. In these cases, security theater provides real security against the legal threat.
Like real security, security theater has a cost. It can cost money, time, concentration, freedoms, and so on. It can come at the cost of reducing the things we can do. Most of the time security theater is a bad trade-off, because the costs far outweigh the benefits. But there are instances when a little bit of security theater makes sense.
We make smart security trade-offs -- and by this I mean trade-offs for genuine security -- when our feeling of security closely matches the reality. When the two are out of alignment, we get security wrong. Security theater is no substitute for security reality, but, used correctly, security theater can be a way of raising our feeling of security so that it more closely matches the reality of security. It makes us feel more secure handing our babies off to doctors and nurses, buying over-the-counter medicines, and flying on airplanes -- closer to how secure we should feel if we had all the facts and did the math correctly.
Of course, too much security theater and our feeling of security becomes greater than the reality, which is also bad. And others -- politicians, corporations and so on -- can use security theater to make us feel more secure without doing the hard work of actually making us secure. That's the usual way security theater is used, and why I so often malign it.
But to write off security theater completely is to ignore the feeling of security. And as long as people are involved with security trade-offs, that's never going to work.
This essay appeared on Wired.com, and is dedicated to my new godson, Nicholas Quillen Perry.
Blog entry URL:
The argument was so obvious it hardly needed repeating. Some thought we would all be safer -- from terrorism, from crime, even from inconvenience -- if we had a better ID card. A good, hard-to-forge national ID is a no-brainer (or so the argument goes), and it's ridiculous that a modern country like the United States doesn't have one.
Still, most Americans have been, and continue to be, opposed to a national ID card. Even just after 9/11, polls showed only a bare majority (51%) in favor -- and that quickly became a minority opinion again. So both political parties came out against the card, which meant that the only way it could become law was to sneak it through.
Republican Rep. F. James Sensenbrenner of Wisconsin did just that. In February 2005, he attached the Real ID Act to a defense appropriations bill. No one was willing to risk appearing not to support the troops by holding up the bill, and it became law. No hearings. No floor debate. With nary a whisper, the United States had a national ID.
By forcing all states to conform to common and more stringent rules for issuing driver's licenses, the Real ID Act turns these licenses into a de facto national ID. It's a massive, unfunded mandate imposed on the states, and -- naturally -- the states have resisted. The detailed rules and timetables are still being worked out by the Department of Homeland Security, and it's the details that will determine exactly how expensive and onerous the program actually is.
It is against this backdrop that the National Governors Association, the National Conference of State Legislatures, and the American Association of Motor Vehicle Administrators together tried to estimate the cost of this initiative. "The Real ID Act: National Impact Analysis" is a methodical and detailed report, and everything after the executive summary is likely to bore anyone but the most dedicated bean counters. But rigor is important because states want to use this document to influence both the technical details and timetable of Real ID. The estimates are conservative, leaving no room for problems, delays, or unforeseen costs, and yet the total cost is $11 billion over the first five years of the program.
If anything, it's surprisingly cheap: Only $37 each for an estimated 295 million people who would get a new ID under this program. But it's still an enormous amount of money. The question to ask is, of course: Is the security benefit we all get worth the $11 billion price tag? We have a cost estimate; all we need now is a security estimate.
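The per-person figure above is simple division, and it's worth checking against the report's numbers as quoted:

```python
total_cost = 11_000_000_000   # $11 billion over the first five years
new_id_holders = 295_000_000  # estimated people who would get a new ID

cost_per_person = total_cost / new_id_holders   # about $37 each
cost_per_person_per_year = cost_per_person / 5  # under $8 per person per year

print(f"${cost_per_person:.2f} per person, ${cost_per_person_per_year:.2f} per year")
```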
I'm going to take a crack at it.
When most people think of ID cards, they think of a small plastic card with their name and photograph. This isn't wrong, but it's only a small piece of any ID program. What starts out as a seemingly simple security device -- a card that binds a photograph with a name -- becomes a complex security system.
It doesn't really matter how well a Real ID works when used by the hundreds of millions of honest people who would carry it. What matters is how the system might fail when used by someone intent on subverting that system: how it fails naturally, how it can be made to fail, and how failures might be exploited.
The first problem is the card itself. No matter how unforgeable we make it, it will be forged. We can raise the price of forgery, but we can't make it impossible. Real IDs will be forged.
Even worse, people will get legitimate cards in fraudulent names. Two of the 9/11 terrorists had valid Virginia driver's licenses in fake names. And even if we could guarantee that everyone who issued national ID cards couldn't be bribed, cards are issued based on other identity documents -- all of which are easier to forge.
And we can't assume that everyone will always have a Real ID. Currently about 20% of all identity documents are lost per year. An entirely separate security system would have to be developed for people who lost their card, a system that itself would be susceptible to abuse.
Additionally, any ID system involves people: people who regularly make mistakes. We've all heard stories of bartenders falling for obviously fake IDs, or sloppy ID checks at airports and government buildings. It's not simply a matter of training; checking IDs is a mind-numbingly boring task, one that is guaranteed to have failures. Biometrics such as thumbprints could help, but bring with them their own set of exploitable failure modes.
All of these problems demonstrate that identification checks based on Real ID won't be nearly as secure as we might hope. But the main problem with any strong identification system is that it requires the existence of a database. In this case, it would have to be 50 linked databases of private and sensitive information on every American -- one widely and instantaneously accessible from airline check-in stations, police cars, schools, and so on.
The security risks of this database are enormous. It would be a kludge of existing databases that are incompatible, full of erroneous data, and unreliable. Computer scientists don't know how to keep a database of this magnitude secure, whether from outside hackers or the thousands of insiders authorized to access it.
But even if we could solve all these problems, and within the putative $11 billion budget, we still wouldn't be getting very much security. A reliance on ID cards is based on a dangerous security myth, that if only we knew who everyone was, we could pick the bad guys out of the crowd.
In an ideal world, what we would want is some kind of ID that denoted intention. We'd want all terrorists to carry a card that said "evildoer" and everyone else to carry a card that said "honest person who won't try to hijack or blow up anything." Then security would be easy. We could just look at people's IDs, and, if they were evildoers, we wouldn't let them on the airplane or into the building.
This is, of course, ridiculous; so we rely on identity as a substitute. In theory, if we know who you are, and if we have enough information about you, we can somehow predict whether you're likely to be an evildoer. But that's almost as ridiculous.
Even worse, as soon as you divide people into two categories -- more trusted and less trusted people -- you create a third, and very dangerous, category: untrustworthy people whom we have no reason to mistrust. Oklahoma City bomber Timothy McVeigh; the Washington, DC, snipers; the London subway bombers; and many of the 9/11 terrorists had no previous links to terrorism. Evildoers can also steal the identity -- and profile -- of an honest person. Profiling can result in less security by giving certain people an easy way to skirt security.
There's another, even more dangerous, failure mode for these systems: honest people who fit the evildoer profile. Because evildoers are so rare, almost everyone who fits the profile will turn out to be a false alarm. Think of all the problems with the government's no-fly list. That list, which is what Real IDs will be checked against, not only wastes investigative resources that might be better spent elsewhere, but it also causes grave harm to those innocents who fit the profile.
Enough of terrorism; what about more mundane concerns like identity theft? Perversely, a hard-to-forge ID card can actually increase the risk of identity theft. A single ubiquitous ID card will be trusted more and used in more applications. Therefore, someone who does manage to forge one -- or get one issued in someone else's name -- can commit much more fraud with it. A centralized ID system is a far greater security risk than a decentralized one with various organizations issuing ID cards according to their own rules for their own purposes.
Security is always a trade-off; it must be balanced with the cost. We all do this intuitively. Few of us walk around wearing bulletproof vests. It's not because they're ineffective, it's because for most of us the trade-off isn't worth it. It's not worth the cost, the inconvenience, or the loss of fashion sense. If we were living in a war-torn country like Iraq, we might make a different trade-off.
Real ID is another lousy security trade-off. It'll cost the United States at least $11 billion, and we won't get much security in return. The report suggests a variety of measures designed to ease the financial burden on the states: extend compliance deadlines, allow manual verification systems, and so on. But what it doesn't suggest is the simple change that would do the most good: scrap the Real ID program altogether. For the price, we're not getting anywhere near the security we should.
This essay will appear in the March/April issue of the "Bulletin of the Atomic Scientists."
The REAL-ID Act: National Impact Analysis:
There's REAL-ID news. Maine became the first state to reject REAL-ID. This means that a Maine state driver's license will not be recognized as valid for federal purposes, although I'm sure the Feds will back down over this. My guess is that Montana will become the second state to reject REAL-ID, and New Mexico will be the third.
More info on REAL-ID:
Crypto-Gram is currently in its tenth year of publication. Back issues cover a variety of security-related topics, and can all be found on <http://www.schneier.com/crypto-gram-back.html>. These are a selection of articles that appeared in this calendar month in other years.
Risks of Losing Portable Devices
Multi-Use ID Cards:
Countering "Trusting Trust"
TSA's Secure Flight
The Curse of the Secret Question:
Authentication and Expiration:
Toward Universal Surveillance:
The Politicization of Security:
Identification and Security:
The Economics of Spam:
Militaries and Cyber-War:
The RMAC Authentication Mode:
Microsoft and "Trustworthy Computing":
Hard-drive-embedded copy protection:
A semantic attack on URLs:
E-mail filter idiocy:
Internet voting vs. large-value e-commerce:
Distributed denial-of-service attacks:
Recognizing crypto snake-oil:
Full disclosure -- the practice of making the details of security vulnerabilities public -- is a damned good idea. Public scrutiny is the only reliable way to improve security, while secrecy only makes us less secure.
Unfortunately, secrecy *sounds* like a good idea. Keeping software vulnerabilities secret, the argument goes, keeps them out of the hands of the hackers. The problem, according to this position, is less the vulnerability itself and more the information about the vulnerability.
But that assumes that hackers can't discover vulnerabilities on their own, and that software companies will spend time and money fixing secret vulnerabilities. Both of those assumptions are false. Hackers have proven to be quite adept at discovering secret vulnerabilities, and full disclosure is the only reason vendors routinely patch their systems.
To understand why the second assumption isn't true, you need to understand the underlying economics. To a software company, vulnerabilities are largely an externality. That is, they affect you -- the user -- much more than they affect it. A smart vendor treats vulnerabilities less as a software problem, and more as a PR problem. So if we, the user community, want software vendors to patch vulnerabilities, we need to make the PR problem more acute.
Full disclosure does this. Before full disclosure was the norm, researchers would discover vulnerabilities in software and send details to the software companies -- who would ignore them, trusting in the security of secrecy. Some would go so far as to threaten the researchers with legal action if they disclosed the vulnerabilities.
Later on, researchers announced that particular vulnerabilities existed, but did not publish details. Software companies would then call the vulnerabilities "theoretical" and deny they actually existed. Of course, they would still ignore the problems, and occasionally threaten the researcher with legal action. Then, of course, some hacker would create an exploit using the vulnerability -- and the company would release a really quick patch, apologize profusely, and go on to explain that the whole thing was entirely the fault of the evil, vile hackers.
It wasn't until researchers published complete details of the vulnerabilities that the software companies started fixing them.
Of course, the software companies hated this. They received bad PR every time a vulnerability was made public, and the only way to get some good PR was to quickly release a patch. For a large company like Microsoft, this was very expensive.
So a bunch of software companies, and some security researchers, banded together and invented "responsible disclosure." The basic idea was that the threat of publishing the vulnerability is almost as good as actually publishing it. A responsible researcher would quietly give the software vendor a head start on patching its software, before releasing the vulnerability to the public.
This was a good idea -- and these days it's normal procedure -- but one that was possible only because full disclosure was the norm. And it remains a good idea only as long as full disclosure is the threat.
The moral here doesn't just apply to software; it's very general. Public scrutiny is how security improves, whether we're talking about software or airport security or government counterterrorism measures. Yes, there are trade-offs. Full disclosure means that the bad guys learn about the vulnerability at the same time as the rest of us -- unless, of course, they knew about it beforehand -- but most of the time the benefits far outweigh the disadvantages.
Secrecy prevents people from accurately assessing their own risk. Secrecy precludes public debate about security, and inhibits security education that leads to improvements. Secrecy doesn't improve security; it stifles it.
I'd rather have as much information as I can to make an informed decision about security, whether it's a buying decision about a software product or an election decision about two political parties. I'd rather have the information I need to pressure vendors to improve security.
I don't want to live in a world where companies can sell me software they know is full of holes or where the government can implement security measures without accountability. I much prefer a world where I have all the information I need to assess and protect my own security.
This essay originally appeared on CSOOnline:
It was part of a series of essays on the topic. Marcus Ranum wrote against the practice of disclosing vulnerabilities:
These are sidebars to a very interesting article in "CSO Magazine," "The Chilling Effect," about the confluence of forces that are making it harder to research and disclose vulnerabilities in web-based software:
A Simplified Chinese translation by Xin Li:
Last month, Mayor Bloomberg announced that New York will be the first city with 911 call centers able to receive images and videos from cell phones and computers. If you witness a crime, you can not only call in -- you can send in a picture or video as well.
This is a great idea that can make us all safer. Often the biggest problem a 911 operator has is getting enough good information from the caller. Sometimes the caller is emotionally distraught. Sometimes there's confusion and background noise. Sometimes there's a language barrier. Giving callers the opportunity to use all the communications tools at their disposal will help operators dispatch the right help faster.
Still images and videos can also help identify and prosecute criminals. Memories are notoriously inaccurate. Photos aren't perfect, but they provide a different sort of evidence -- one that, with the right safeguards, can be used in court.
The worry is that New York will become a city of amateur sleuths and snitches, turning each other in to settle personal scores or because of cultural misunderstandings. But the 911 service has long avoided such hazards. Falsely reporting a crime is itself a serious crime, which discourages people from using 911 for anything other than a true emergency.
Since 1968, the 911 system has evolved smartly with the times. Calls are now automatically recorded. Callers are now automatically located by phone number or cell phone location.
Bloomberg's plan is the next logical evolution -- one that all of us should welcome. Smile, suspected criminals: you're on candid camphone.
CLEAR, a private service that prescreens travelers for a $100 annual fee, has come to Kennedy International Airport. To benefit from the Clear Registered Traveler program, which is run by Verified Identity Pass, a person must fill out an application, let the service capture his fingerprints and iris pattern, and present two forms of identification. If the traveler passes a federal background check, he will be given a card that allows him to pass quickly through airport security.
Sounds great, but it's actually two ideas rolled into one: one clever and one very stupid.
The clever idea is allowing people to pay for better service. Clear has been in operation at the Orlando International Airport since July 2005, and members have passed through security checkpoints faster simply because they are segregated from less experienced fliers who don't know the drill.
Now, at Kennedy and other airports, Clear is purchasing and installing federally approved technology that will further speed up the screening process: scanners that will eliminate the need for cardholders to remove their shoes, and explosives detection machines that will eliminate the need for them to remove their coats and jackets. There are also Clear employees at the checkpoints who, although they can't screen cardholders, can guide members through the security process. Clear has not yet paid airports for an extra security lane or the Transportation Security Administration for extra screening personnel, but both of those enhancements are on the table if enough people sign up.
I fly more than 200,000 miles per year and would gladly pay $100 a year to get through airport security faster.
But the stupid idea is the background check. When first conceived, traveler programs focused on prescreening. Pre-approved travelers would pass through security checkpoints with less screening, and resources would be focused on everyone else. Sounds reasonable, but it would leave us all less safe.
Background checks are based on the dangerous myth that we could somehow pick terrorists out of a crowd if only we could identify everyone. Unfortunately, there isn't any terrorist profile that prescreening can uncover. Timothy McVeigh could probably have gotten one of these cards. So could Eric Rudolph, the pipe bomber at the 1996 Olympic Games in Atlanta. There isn't even a good list of known terrorists to check people against; the government list used by the airlines has been the butt of jokes for years.
And have we forgotten how prevalent identity theft is these days? If you think having a criminal impersonating you to your bank is bad, wait until they start impersonating you to the Transportation Security Administration.
The truth is that whenever you create two paths through security -- a high-security path and a low-security path -- you have to assume that the bad guys will find a way to exploit the low-security path. It may be counterintuitive, but we are all safer if the people chosen for more thorough screening are truly random and not based on an error-filled database or a cursory background check.
I think of Clear as a $100 service that tells terrorists if the FBI is on to them or not. Why in the world would we provide terrorists with this ability?
We don't have to. Clear cardholders are not scrutinized less when they go through checkpoints; they're scrutinized more efficiently. So why not get rid of the background checks altogether? We should all be able to walk into the airport, pay $10, and use the Clear lanes when it's worth it to us.
Verified Identity Pass, Inc.
Non-terrorist embarrassment in Boston. I wrote about it in my blog the day after it happened.
Security theater and a secure data center:
Terrorists might bomb airplanes, take and kill hostages, and otherwise terrorize innocents. But there's one thing they just won't do: lie on government forms. And that's why the State of Ohio requires applicants for certain licenses (including private pilot licenses) to certify that they're not terrorists. Because if we can't lock them up long enough for terrorism, we've got the additional charge of lying on a government form to throw at them.
FedEx refuses to ship empty containers: security theater at its finest.
The "Washington Post" on ubiquitous surveillance
RFID tattoos. Great idea for livestock. Dumb idea for soldiers:
Second in our series of stupid comments to the press, here's Kansas City's assistant city manager commenting on the fact that they lost 26 computer tapes containing personal information: "It's not a situation that if you had a laptop you could access.... You would need some specialized equipment and some specialized knowledge in order to read these tapes."
Dogbert's Password Recovery Service for Morons
Sir Ken Macdonald -- the UK's Director of Public Prosecutions -- has spoken out against the "war on terror" rhetoric:
The Blu-ray DRM system has been broken, although details are scant. It's the same person who broke the HD DVD system in December. (Both use AACS.) As I've written previously, both of these systems are supposed to be designed in such a way as to recover from hacks like this. We're going to find out if the recovery feature works.
"Prophetic Justice" by Amy Waldman ("The Atlantic Monthly," Oct 2006) is a fascinating article about terrorism trials in the U.S. where the prosecution attempts to prove that the defendant was planning on committing an act of terrorism. Very often, the trials hinge on different interpretations of Islam, Islamic scripture, and Islamic belief -- and often we are essentially putting the religion on trial. Reading it, I was struck with some of the more extremist religious rhetoric in the U.S. today, and how it would fare under the same level of scrutiny. It's a long article, but well worth reading. There are many problems with prosecuting people for thoughtcrimes, and the article discusses some of them.
I've previously written about how official uniforms are inherent authentication tokens, even though they can be easy to forge.
Airport security game: Play online, and see if you can keep up with the ever-changing arbitrary rules.
"Internet Explorer Unsafe for 284 Days in 2006."
Excessive secrecy and security helps terrorists. I've said it, and now so has the director of the Canadian Security Intelligence Service:
Business models for discovering security vulnerabilities, both legal and illegal. There's a lot of FUD in this article, but also some good stuff.
Dave Barry on Super Bowl security:
Fascinating article on the Corsham bunker, the secret underground UK site the government was to retreat to in the event of a nuclear war.
Interesting data from New York. The number of people stopped and searched has gone up fivefold since 2002, but the number of arrests due to these stops has only doubled. (The number of "summonses" has also gone up fivefold.)
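Since the item gives ratios rather than absolute counts, the change in the arrest rate per stop can be computed from the ratios alone (a sketch, with the 2002 baseline normalized to 1):

```python
# Growth factors from the item above, relative to the 2002 baseline.
stops_growth = 5.0    # stops and searches went up fivefold
arrests_growth = 2.0  # arrests resulting from those stops only doubled

hit_rate_change = arrests_growth / stops_growth
print(f"each stop is now {hit_rate_change:.0%} as likely to yield an arrest")
# 2/5 = 40% of the 2002 rate -- i.e., the per-stop hit rate fell by 60%.
```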
Three pipe bombs were found in the town of Pearblossom, California, and -- it seems -- disposed of without causing hysteria. Boston, are you paying attention?
Ross Anderson and Tyler Moore just published "The Economics of Information Security: A Survey and Open Questions." Excellent reading.
This article is a perfect illustration of the wasteful, pork-barrel, political spending that we like to call "homeland security." And to think we could actually be spending this money on something useful.
We've all seen those anti-counterfeiting holograms: on credit cards, on software, on expensive apparel. Turns out they're getting easier to counterfeit.
BitFrost, the security system for the One Laptop Per Child project, is very interesting. At least read the design principles and design goals.
Here's an article on a brain scanning technique that reads people's intentions. There's not a lot of detail, but my guess is that it doesn't work very well. But that's not really the point. If it doesn't work today, it will in five, ten, twenty years; it will work eventually. What we need to do, today, is debate the legality and ethics of these sorts of interrogations.
Random number humor:
Windows Vista includes an array of "features" that you don't want. These features will make your computer less reliable and less secure. They'll make your computer less stable and run slower. They will cause technical support problems. They may even require you to upgrade some of your peripheral hardware and existing software. And these features won't do anything useful. In fact, they're working against you. They're digital rights management (DRM) features built into Vista at the behest of the entertainment industry.
And you don't get to refuse them.
The details are pretty geeky, but basically Microsoft has reworked a lot of the core operating system to add copy protection technology for new media formats like HD DVD and Blu-ray disks. Certain high-quality output paths -- audio and video -- are reserved for protected peripheral devices. Sometimes output quality is artificially degraded; sometimes output is prevented entirely. And Vista continuously spends CPU time monitoring itself, trying to figure out if you're doing something that it thinks you shouldn't. If it does, it limits functionality and in extreme cases restarts just the video subsystem. We still don't know the exact details of all this, and how far-reaching it is, but it doesn't look good.
Microsoft put all those functionality-crippling features into Vista because it wants to own the entertainment industry. This isn't how Microsoft spins it, of course. It maintains that it has no choice, that it's Hollywood that is demanding DRM in Windows in order to allow "premium content" -- meaning new movies that are still earning revenue -- onto your computer. If Microsoft didn't play along, it'd be relegated to second-class status as Hollywood pulled its support for the platform.
It's all complete nonsense. Microsoft could have easily told the entertainment industry that it was not going to deliberately cripple its operating system, take it or leave it. With 95% of the operating system market, where else would Hollywood go? Sure, Big Media has been pushing DRM, but recently some -- Sony after their 2005 debacle and now EMI Group -- are having second thoughts.
What the entertainment companies are finally realizing is that DRM doesn't work, and just annoys their customers. Like every other DRM system ever invented, Microsoft's won't keep the professional pirates from making copies of whatever they want. The DRM security in Vista was broken the day it was released. Sure, Microsoft will patch it, but the patched system will get broken as well. It's an arms race, and the defenders can't possibly win.
I believe that Microsoft knows this and also knows that it doesn't matter. This isn't about stopping pirates and the small percentage of people who download free movies from the Internet. This isn't even about Microsoft satisfying its Hollywood customers at the expense of those of us paying for the privilege of using Vista. This is about the overwhelming majority of honest users and who owns the distribution channels to them. And while it may have started as a partnership, in the end Microsoft is going to end up locking the movie companies into selling content in its proprietary formats.
We saw this trick before; Apple pulled it on the recording industry. First iTunes worked in partnership with the major record labels to distribute content, but soon Warner Music's CEO Edgar Bronfman Jr. found that he wasn't able to dictate a pricing model to Steve Jobs. The same thing will happen here; after Vista is firmly entrenched in the marketplace, Sony's Howard Stringer won't be able to dictate pricing or terms to Bill Gates. This is a war for 21st-century movie distribution and, when the dust settles, Hollywood won't know what hit them.
To be fair, just last week Steve Jobs publicly came out against DRM for music. It's a reasonable business position, now that Apple controls the online music distribution market. But Jobs never mentioned movies, and he is the largest single shareholder in Disney. Talk is cheap. The real question is whether he would actually allow iTunes Music Store purchases to play on Microsoft or Sony players, or whether this is just a clever way of deflecting blame to the--already hated--music labels.
Microsoft is reaching for a much bigger prize than Apple: not just Hollywood, but also peripheral hardware vendors. Vista's DRM will require driver developers to comply with all kinds of rules and be certified; otherwise, they won't work. And Microsoft talks about expanding this to independent software vendors as well. It's another war for control of the computer market.
Unfortunately, we users are caught in the crossfire. We are not only stuck with DRM systems that interfere with our legitimate fair-use rights for the content we buy, we're stuck with DRM systems that interfere with all of our computer use--even the uses that have nothing to do with copyright.
I don't see the market righting this wrong, because Microsoft's monopoly position gives it much more power than we consumers can hope to have. It might not be as obvious as Microsoft using its operating system monopoly to kill Netscape and own the browser market, but it's really no different. Microsoft's entertainment market grab might further entrench its monopoly position, but it will cause serious damage to both the computer and entertainment industries. DRM is bad, both for consumers and for the entertainment industry: something the entertainment industry is just starting to realize, but Microsoft is still fighting. Some researchers think that this is the final straw that will drive Windows users to the competition, but I think the issue will have to be resolved in the courts.
Vista DRM hacked:
Steve Jobs on DRM:
This essay originally appeared on Forbes.com.
Rebecca Blood interviewed me for her "Bloggers on Blogging" series.
On June 10, 2006, I gave a talk at the ACLU New Jersey Membership Conference: "Counterterrorism in America: Security Theater Against Movie-Plot Threats." Here's the video (a little over an hour long).
Here's an interview I did with LinuxWorld. It was a verbal interview that they transcribed.
And a short interview with me for Information Week:
BT has bought INS. This is good for Counterpane, as our two companies have been partners for years. And, as with all of BT's private acquisitions, the purchase price is not public.
I just posted a long essay on my website, exploring how psychology can help explain the difference between the feeling of security and the reality of security.
It's too long to include in this issue, and I will be sending it out to everyone in a special issue of Crypto-Gram on February 28. In the meantime, you can read an earlier draft of the essay by following the link below.
Other articles and commentary:
The U.S. National Institute of Standards and Technology is having a competition for a new cryptographic hash function.
This matters. The phrase "one-way hash function" might sound arcane and geeky, but hash functions are the workhorses of modern cryptography. They provide web security in SSL. They help with key management in e-mail and voice encryption: PGP, Skype, all the others. They help make it harder to guess passwords. They're used in virtual private networks, help provide DNS security, and ensure that your automatic software updates are legitimate. They provide all sorts of security functions in your operating system. Every time you do something with security on the Internet, a hash function is involved somewhere.
Basically, a hash function is a fingerprint function. It takes a variable-length input -- anywhere from a single byte to a file terabytes in length -- and converts it to a fixed-length string: 20 bytes, for example.
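The fingerprint idea is easy to see in code. This sketch uses Python's standard hashlib (the message strings are just made-up examples): whatever the input size, SHA-1 always returns the same fixed-length, 20-byte digest.

```python
import hashlib

# SHA-1 maps any input -- one byte or a million -- to a fixed 20-byte digest.
inputs = [b"x", b"a somewhat longer message", b"x" * 1_000_000]
for msg in inputs:
    digest = hashlib.sha1(msg).digest()
    print(f"input: {len(msg):>8} bytes -> digest: {len(digest)} bytes "
          f"({digest.hex()[:16]}...)")
```

Every digest printed is exactly 20 bytes (160 bits), which is why a short hash can stand in as a fingerprint for an arbitrarily large file.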
One-way hash functions are supposed to have two properties. First, they're one-way. This means that it is easy to take an input and compute the hash value, but it's impossible to take a hash value and recreate the original input. By "impossible" I mean "can't be done in any reasonable amount of time."
Second, they're collision-free. This means that even though there are an infinite number of inputs for every hash value, you're never going to find two inputs that hash to the same value. Again, "never" is defined as above. The cryptographic reasoning behind these two properties is subtle, but any cryptography text discusses them.
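A quick demonstration of why inverting or colliding a good hash is so hard: flip even one character of the input, and the output digest is scrambled completely, giving an attacker no foothold. (The two messages here are arbitrary examples; this shows the "avalanche" behavior, not a proof of security.)

```python
import hashlib

h1 = hashlib.sha1(b"attack at dawn").hexdigest()
h2 = hashlib.sha1(b"attack at dusk").hexdigest()

# Count how many of the 40 hex digits differ between the two digests.
differing = sum(a != b for a, b in zip(h1, h2))
print(h1)
print(h2)
print(f"{differing} of 40 hex digits differ")
```

Nearly similar inputs produce wildly different digests, so there's no way to "work backward" from a digest toward its input, or to nudge one input toward colliding with another.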
The hash function you're most likely to use routinely is SHA-1. Invented by the National Security Agency, it's been around since 1995. Recently, though, there have been some pretty impressive cryptanalytic attacks against the algorithm. The best attack is barely on the edge of feasibility, and not effective against all applications of SHA-1. But there's an old saying inside the NSA: "Attacks always get better; they never get worse." It's past time to abandon SHA-1.
There are near-term alternatives -- a related algorithm called SHA-256 is the most obvious -- but they're all based on the family of hash functions first developed in 1992. We've learned a lot more about the topic in the past 15 years, and can certainly do better.
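For most software, moving from SHA-1 to the interim SHA-256 really is this small a change -- here sketched with Python's standard hashlib (the message is a placeholder):

```python
import hashlib

msg = b"some document to fingerprint"

# SHA-1: 160-bit digest (40 hex characters) -- deprecated.
print(hashlib.sha1(msg).hexdigest())

# SHA-256: 256-bit digest (64 hex characters) -- the near-term replacement.
print(hashlib.sha256(msg).hexdigest())
```

The code change is trivial; the hard part of migration is protocol and format compatibility, since digest lengths and negotiated algorithm identifiers change too.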
Why the National Institute of Standards and Technology, or NIST, though? Because it has exactly the experience and reputation we want. We were in the same position with encryption functions in 1997. We needed to replace the Data Encryption Standard, but it wasn't obvious what should replace it. NIST decided to orchestrate a worldwide competition for a new encryption algorithm. There were 15 submissions from 10 countries -- I was part of the group that submitted Twofish -- and after four years of analysis and cryptanalysis, NIST chose the algorithm Rijndael to become the Advanced Encryption Standard, or AES.
The AES competition was the most fun I've ever had in cryptography. Think of it as a giant cryptographic demolition derby: A bunch of us put our best work into the ring, and then we beat on each other until there was only one standing. It was really more academic and structured than that, but the process stimulated a lot of research in block-cipher design and cryptanalysis. I personally learned an enormous amount about those topics from the AES competition, and we as a community benefited immeasurably.
NIST did a great job managing the AES process, so it's the perfect choice to do the same thing with hash functions. And it's doing just that. Last year and the year before, NIST sponsored two workshops to discuss the requirements for a new hash function, and last month it announced a competition to choose a replacement for SHA-1. Submissions will be due in fall 2008, and a single standard is scheduled to be chosen by the end of 2011.
That may sound like a long time, but it's a reasonable schedule. Designing a secure hash function seems harder than designing a secure encryption algorithm, although we don't know whether this is inherently true of the mathematics or simply a result of our imperfect knowledge. Producing a new secure hash standard is going to take a while. Luckily, we have an interim solution in SHA-256.
Now, if you'll excuse me, the Twofish team needs to reconstitute and get to work on an Advanced Hash Standard submission.
NIST Hash Competition:
This essay originally appeared on Wired.com.
Every time I write about one-way hash functions, I get responses from people claiming they can't possibly be secure because an infinite number of texts hash to the same short (160-bit, in the case of SHA-1) hash value. Yes, of course an infinite number of texts hash to the same value; that's the way the function works. But the odds of it happening naturally are less than the odds of all the air molecules bunching up in the corner of the room and suffocating you, and you can't force it to happen, either. Right now, several groups are trying to implement Xiaoyun Wang's attack against SHA-1. I predict one of them will find two texts that hash to the same value this year -- it will demonstrate that the hash function is broken and be really big news.
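The "air molecules in the corner of the room" comparison comes from the birthday bound: for an n-bit hash, you expect a random collision only after about 2^(n/2) hashes. A back-of-the-envelope calculation (the billion-hashes-per-second rate is an illustrative assumption) shows why natural collisions just don't happen:

```python
# Birthday bound: an n-bit hash needs roughly 2**(n/2) evaluations
# before a random collision becomes likely.
n = 160  # SHA-1 digest size in bits
work = 2 ** (n // 2)  # about 1.2e24 hashes

# Assume (hypothetically) we can compute a billion hashes per second.
rate = 1e9
years = work / rate / (3600 * 24 * 365)
print(f"~2^{n // 2} hashes needed; at {rate:.0e} hashes/sec "
      f"that's about {years:.1e} years")
```

That works out to tens of millions of years of continuous hashing -- which is exactly why breaking SHA-1 requires cryptanalytic shortcuts like Wang's attack rather than brute force.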
There are hundreds of comments -- many of them interesting -- on these topics on my blog. Search for the story you want to comment on, and join in.
CRYPTO-GRAM is a free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise. You can subscribe, unsubscribe, or change your address on the Web at <http://www.schneier.com/crypto-gram.html>. Back issues are also available at that URL.
Please feel free to forward CRYPTO-GRAM, in whole or in part, to colleagues and friends who will find it valuable. Permission is also granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.
CRYPTO-GRAM is written by Bruce Schneier. Schneier is the author of the best sellers "Beyond Fear," "Secrets and Lies," and "Applied Cryptography," and an inventor of the Blowfish and Twofish algorithms. He is founder and CTO of BT Counterpane, and is a member of the Board of Directors of the Electronic Privacy Information Center (EPIC). He is a frequent writer and lecturer on security topics. See <http://www.schneier.com>.
BT Counterpane is the world's leading protector of networked information - the inventor of outsourced security monitoring and the foremost authority on effective mitigation of emerging IT threats. BT Counterpane protects networks for Fortune 1000 companies and governments world-wide. See <http://www.counterpane.com>.
Crypto-Gram is a personal newsletter. Opinions expressed are not necessarily those of BT or BT Counterpane.
Copyright (c) 2007 by Bruce Schneier.
Schneier.com is a personal website. Opinions expressed are not necessarily those of BT.