Crypto-Gram

August 15, 2008

by Bruce Schneier
Chief Security Technology Officer, BT
schneier@schneier.com
http://www.schneier.com

A free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise.

For back issues, or to subscribe, visit <http://www.schneier.com/crypto-gram.html>.

You can read this issue on the web at <http://www.schneier.com/crypto-gram-0808.html>. These same essays appear in the “Schneier on Security” blog: <http://www.schneier.com/>. An RSS feed is available.


In this issue:
      Memo to the Next President
      TSA Proud of Confiscating Non-Dangerous Item
      Homeland Security Cost-Benefit Analysis
      News
      Hacking Mifare Transport Cards
      Information Security and Liabilities
      Software Liabilities and Free Software
      Schneier/BT News
      Congratulations to Our Millionth Terrorist!
      TrueCrypt’s Deniable File System
      The DNS Vulnerability
      Comments from Readers


Memo to the Next President

Obama has a cyber security plan.

It’s basically what you would expect: Appoint a national cyber security advisor, invest in math and science education, establish standards for critical infrastructure, spend money on enforcement, establish national standards for securing personal data and data-breach disclosure, and work with industry and academia to develop a bunch of needed technologies.

I could comment on the plan, but with security the devil is always in the details—and, of course, at this point there are few details. But since he brought up the topic—McCain supposedly is “working on the issues” as well—I have three pieces of policy advice for the next president, whoever he is. They’re too detailed for campaign speeches or even position papers, but they’re essential for improving information security in our society. Actually, they apply to national security in general. And they’re things only government can do.

One, use your immense buying power to improve the security of commercial products and services. One property of technological products is that most of the cost is in the development of the product rather than the production. Think software: The first copy costs millions, but the second copy is free.

You have to secure your own government networks, military and civilian. You have to buy computers for all your government employees. Consolidate those contracts, and start putting explicit security requirements into the RFPs. You have the buying power to get your vendors to make serious security improvements in the products and services they sell to the government, and then we all benefit because they’ll include those improvements in the same products and services they sell to the rest of us. We’re all safer if information technology is more secure, even though the bad guys can use it, too.

Two, legislate results and not methodologies. There are a lot of areas in security where you need to pass laws, where the security externalities are such that the market fails to provide adequate security. For example, software companies that sell insecure products are exploiting an externality just as much as chemical plants that dump waste into the river. But a bad law is worse than no law. A law requiring companies to secure personal data is good; a law specifying what technologies they should use to do so is not. Mandating liabilities for software failures is good; detailing how is not. Legislate for the results you want and implement the appropriate penalties; let the market figure out how—that’s what markets are good at.

Three, broadly invest in research. Basic research is risky; it doesn’t always pay off. That’s why companies have stopped funding it. Bell Labs is gone because nobody could afford it after the AT&T breakup, but the root cause was a desire for higher efficiency and short-term profitability—not unreasonable in an unregulated business. Government research can be used to balance that by funding long-term research.

Spread those research dollars wide. Lately, most research money has been redirected through DARPA to near-term military-related projects; that’s not good. Keep the earmark-happy Congress from dictating how the money is spent. Let the NSF, NIH and other funding agencies decide how to spend the money and don’t try to micromanage. Give the national laboratories lots of freedom, too. Yes, some research will sound silly to a layman. But you can’t predict what will be useful for what, and if funding is really peer-reviewed, the average results will be much better. Compared to corporate tax breaks and other subsidies, this is chump change.

If our research capability is to remain vibrant, we need more science and math students with decent elementary and high school preparation. The declining interest stems partly from the perception that scientists don’t get rich like lawyers and dentists and stockbrokers, and partly from the fact that science isn’t valued in a country full of creationists. One way the president can help is by trusting scientific advisers and not overruling them for political reasons.

Oh, and get rid of those post-9/11 restrictions on student visas that are causing so many top students to do their graduate work in Canada, Europe and Asia instead of in the United States. Those restrictions will hurt us immensely in the long run.

Those are the three big ones; the rest is in the details. And it’s the details that matter. There are lots of serious issues that you’re going to have to tackle: data privacy, data sharing, data mining, government eavesdropping, government databases, use of Social Security numbers as identifiers, and so on. It’s not enough to get the broad policy goals right. You can have good intentions and enact a good law, and have the whole thing completely gutted by two sentences sneaked in during rulemaking by some lobbyist.

Security is both subtle and complex, and—unfortunately—doesn’t readily lend itself to normal legislative processes. You’re used to finding consensus, but security by consensus rarely works. On the internet, security standards are much worse when they’re developed by a consensus body, and much better when someone just does them. This doesn’t always work—a lot of crap security has come from companies that have “just done it”—but nothing but mediocre standards comes from consensus bodies. The point is that you won’t get good security without pissing someone off: the information broker industry, the voting machine industry, the telcos. The normal legislative process makes it hard to get security right, which is why I don’t have much optimism about what you can get done.

And if you’re going to appoint a cybersecurity czar, you have to give him actual budgetary authority. Otherwise he won’t be able to get anything done, either.

Obama’s plan:
http://www.barackobama.com/2008/07/16/…
http://www.barackobama.com/2008/07/16/…
McCain:
http://www.scmagazineus.com/…
Dual-use technologies:
https://www.schneier.com/blog/archives/2008/05/…

Good legislation:
http://www.schneier.com/essay-141.html
https://www.schneier.com/blog/archives/2007/01/…

Liabilities:
http://www.schneier.com/essay-025.html
http://www.schneier.com/essay-116.html

Research redirected through DARPA:
http://query.nytimes.com/gst/fullpage.html?…
Congressional earmarks:
http://www.ostp.gov/pdf/1pger_earmark.pdf

Student visa problems:
http://www7.nationalacademies.org/visas/…
http://www.aau.edu/research/Gast.pdf

This essay originally appeared on Wired.com:
http://www.wired.com/politics/security/commentary/…


TSA Proud of Confiscating Non-Dangerous Item

This is just sad. The TSA confiscated a battery pack not because it’s dangerous, but because other passengers might *think* it’s dangerous. And they’re proud of the fact.

My guess is that if Kip Hawley were allowed to comment on my blog, he would say something like this: “It’s not just bombs that are prohibited; it’s things that look like bombs. This looks enough like a bomb to fool the other passengers, and that in itself is a threat.”

Okay, that’s fair. But the average person doesn’t know what a bomb looks like; all he knows is what he sees on television and in the movies. And this rule means that all homemade electronics are confiscated, because anything homemade with wires can look like a bomb to someone who doesn’t know better. The rule just doesn’t work.

And in today’s passengers-fight-back world, do you think anyone is going to successfully do anything with a fake bomb?

Late note: The TSA webpage has been updated; they admit that they overreacted.

http://www.tsa.gov/press/happenings/scot_peele.shtm


Homeland Security Cost-Benefit Analysis

An excellent paper by Ohio State political science professor John Mueller, “The Quixotic Quest for Invulnerability: Assessing the Costs, Benefits, and Probabilities of Protecting the Homeland,” lays out some common-sense premises and policy implications.

The premises:

“1. The number of potential terrorist targets is essentially infinite.

“2. The probability that any individual target will be attacked is essentially zero.

“3. If one potential target happens to enjoy a degree of protection, the agile terrorist usually can readily move on to another one.

“4. Most targets are ‘vulnerable’ in that it is not very difficult to damage them, but invulnerable in that they can be rebuilt in fairly short order and at tolerable expense.

“5. It is essentially impossible to make a very wide variety of potential terrorist targets invulnerable except by completely closing them down.”

The policy implications:

“1. Any protective policy should be compared to a “null case”: do nothing, and use the money saved to rebuild and to compensate any victims.

“2. Abandon any effort to imagine a terrorist target list.

“3. Consider negative effects of protection measures: not only direct cost, but inconvenience, enhancement of fear, negative economic impacts, reduction of liberties.

“4. Consider the opportunity costs, the tradeoffs, of protection measures.”

The whole paper is worth reading.

http://psweb.sbs.ohio-state.edu/faculty/jmueller/…


News

A disgruntled employee holds the San Francisco computer network hostage, proving that trusted insiders can do a lot of damage.
http://www.sfgate.com/cgi-bin/article.cgi?f=/c/a/…
http://www.darkreading.com/.asp?…
http://www.computerworld.com/action/article.do?…

Locksmiths hate computer geeks who learn lockpicking.
http://www.theglobeandmail.com/servlet/story/…
http://www.crypto.com/papers/safelocks.pdf

Funny radio skit on identity theft, by Mitchell & Webb.
http://www.youtube.com/watch?v=CS9ptA3Ya9E

This report, “Assessing the risks, costs and benefits of United States aviation security measures” by Mark Stewart and John Mueller, is excellent reading. Reinforcing the cockpit door is cost effective; sky marshals are not. The final paper will eventually be published in the Journal of Transportation Security. I never even knew there was such a thing.
http://hdl.handle.net/1959.13/28097
New York Times op-ed on the same subject:
http://www.nytimes.com/2008/07/21/opinion/…

Who cannot feel a little chill of fear after reading this: “Britain on alert for deadly new knife with exploding tip that freezes victims’ organs.” Yes, it’s real. The knife is designed for people who need to drop large animals quickly: sharks, bears, etc.
http://www.dailymail.co.uk/news/article-1035729/…
http://www.waspknife.com/
I have no idea why Britain is on alert for it. Maybe because knife crimes are on the rise.
http://www.nytimes.com/2008/07/17/world/europe/…

A high-level British government employee supposedly had his BlackBerry stolen by Chinese intelligence. But the story doesn’t make sense. If you’re a Chinese intelligence officer and you manage to get an aide to the British Prime Minister to have sex with one of your agents, you’re not going to immediately burn him by stealing his BlackBerry. That’s just stupid. If anything, you’d clone the BlackBerry and return it. This is much more likely to be petty theft.
http://www.timesonline.co.uk/tol/news/politics/…

Clever Washington DC metro Farecard hack:
http://www.washingtonpost.com/wp-dyn/content/…

This article about British speed cameras, and a trick to avoid them that does not work, contains this sentence: “As vehicles pass between the entry and exit camera points their number plates are digitally recorded, whether speeding or not.” Without knowing more, I can guarantee that those records are kept forever.
http://www.theregister.co.uk/2008/07/21/…

Here’s someone in the UK, a passenger in a car, who moons a speed camera and gets his picture published even though the car was not speeding. How did they know to look at the picture in the first place?
http://news.bbc.co.uk/1/hi/england/tyne/7378695.stm

They were confiscating sunscreen at Yankee Stadium as an anti-terrorism measure. This story has a happy ending, though. A day after The New York Post published this story, Yankee Stadium reversed its ban. Now, if only the Post had that same effect on airport security.
http://www.nypost.com/seven/07222008/news/…
https://www.schneier.com/blog/archives/2008/06/…
http://www.salon.com/sports/daily/?last_story=/…

Adeona is an open source laptop tracking service.
http://adeona.cs.washington.edu/index.html
http://www.pcworld.com/businesscenter/article/…

From a Washington Post article on terrorist plots comes this quote: “Batiste confided, somewhat fantastically, that he wanted to blow up the Sears Tower in Chicago, which would then fall into a nearby prison, freeing Muslim prisoners who would become the core of his Moorish army. With them, he would establish his own country.” *Somewhat* fantastically? What would the Washington Post consider to be truly fantastic? A plan involving Godzilla? Clearly they have some very high standards. I’m sick of people taking these idiots seriously. This plot is beyond fantastic; it’s delusional.
http://www.washingtonpost.com/wp-dyn/content/…
https://www.schneier.com/blog/archives/2007/06/…

SanDisk has introduced Write-Once Read-Many (WORM) memory cards for forensic applications.
http://www.sandisk.com/Corporate/PressRoom/…

Great World War II deception story in an obituary of former OSS agent Roger Hall. Hall’s book about his OSS days, “You’re Stepping on My Cloak and Dagger,” is a must-read.
http://www.philly.com/inquirer/obituaries/…

Video demonstrating how easy it is to social engineer your way into clubs by pretending you’re the DJ.
http://www.5min.com/Video/…

3,000 blank British passports stolen. Looks like an inside job to me.
http://www.time.com/time/world/article/…
http://www.foxnews.com/story/0,2933,393581,00.html
http://news.sky.com/skynews/Home/Politics/…

This is an engaging and fascinating video presentation by Professor James Duane of the Regent University School of Law, explaining why—in a criminal matter—you should never, ever, ever talk to the police or any other government agent. It doesn’t matter if you’re guilty or innocent, if you have an alibi or not—it isn’t possible for anything you say to help you, and it’s very possible that innocuous things you say will hurt you. Definitely worth half an hour of your time.
http://video.google.com/videoplay?…
And this is a video of Virginia Beach Police Department Officer George Bruch, who basically says that Duane is right.
http://video.google.com/videoplay?…

Remember when I said that I keep my home wireless network open? Here’s a reason not to listen to me. “When Indian police investigating bomb blasts which killed 42 people traced an email claiming responsibility to a Mumbai apartment, they ordered an immediate raid. But at the address, rather than seizing militants from the Islamist group which said it carried out the attack, they found a group of puzzled American expats.” Of course, the terrorists could have sent the e-mail from anywhere. But life is easier if the police don’t raid *your* apartment.
http://www.guardian.co.uk/world/2008/jul/29/…
https://www.schneier.com/blog/archives/2008/01/…

Suspect in 2001 anthrax attacks kills self. Fascinating stuff, although this early story leaves me with more questions than answers.
http://www.cnn.com/2008/CRIME/08/01/…

The U.S. government has published its policy for seizing laptops at borders: they can take your laptop anywhere they want, for as long as they want, and share the information with anyone they want.
http://www.washingtonpost.com/wp-dyn/content/…
http://www.cbp.gov/linkhandler/cgov/travel/…
http://yro.slashdot.org/yro/08/08/01/0958242.shtml
http://www.schneier.com/essay-217.html

Schneier misquote:
https://www.schneier.com/blog/archives/2008/08/…

Good perspective on Gary McKinnon’s extradition to the United States.
http://www.guardian.co.uk/commentisfree/2008/aug/01/…

Italians use soldiers to prevent crime. More security theater than anything else.
http://www.nytimes.com/2008/08/05/world/europe/…

Laptop with Trusted Traveler identities lost, presumed stolen, and then found.
http://www.orlandosentinel.com/business/…
http://cbs5.com/local/tsa.security.clear.2.788083.html
http://www.tsa.gov/press/releases/2008/0804.shtm
https://www.schneier.com/blog/archives/2007/01/…
https://www.schneier.com/blog/archives/2008/06/…
https://www.schneier.com/blog/archives/2006/11/…
http://www.sfgate.com/cgi-bin/article.cgi?f=/n/a/…
My essay on Trusted Traveler:
http://www.schneier.com/essay-199.html

Lots of NSA forms, obtained via the Freedom of Information Act:
http://www.thememoryhole.org/2008/07/…

Security idiocy story from the Dilbert blog:
http://dilbert.com/blog/entry/true_story/

These indictments against the largest ID theft ring ever were really big news, but I don’t think they’re that big a deal. These crimes are still easy to commit and it’s still too hard to catch the criminals. Catching one gang, even a large one, isn’t going to make us any safer.
http://www.washingtonpost.com/wp-dyn/content/…
http://money.cnn.com/2008/08/05/news/companies/…
http://technology.timesonline.co.uk/tol/news/world/…
http://www.iht.com/articles/ap/2008/08/06/business/…
http://www.theregister.co.uk/2008/08/06/…
http://ap.google.com/article/…
If we want to mitigate identity theft, we have to make it harder for people to get credit, make transactions, and generally do financial business remotely.
https://www.schneier.com/blog/archives/2005/04/…

The headline says it all: “‘Fakeproof’ e-passport is cloned in minutes.”
http://www.timesonline.co.uk/tol/news/uk/crime/…
http://www.schneier.com/essay-125.html

DMCA does not apply to the U.S. government:
http://arstechnica.com/news.ars/post/…

Random killing on a Canadian Greyhound bus, and the predictable security overreaction:
https://www.schneier.com/blog/archives/2008/08/…

The Onion: Are the Chinese Olympics a trap?
http://www.theonion.com/content/video/…

Amber Alerts as security theater:
http://www.boston.com/bostonglobe/ideas/articles/…

Bypassing Microsoft Vista’s memory protection:
http://searchsecurity.techtarget.com/news/article/…
http://taossa.com/archive/bh08sotirovdowd.pdf
http://arstechnica.com/news.ars/post/…

Seems like the procedure for flying without ID has changed. Now they ask you personal questions based on your credit history.
http://philosecurity.org/2008/08/10/…
This only works if you’ve lost your ID, not if you refuse to show it.
https://www.schneier.com/blog/archives/2008/06/…

The UK has made public its previously classified National Risk Register. Seems like the greatest threat to national security is a flu pandemic.
http://www.cabinetoffice.gov.uk/reports/…

Interesting paper on the risk of anthrax as a terrorist weapon:
http://www.stratfor.com/weekly/busting_anthrax_myth

I don’t know the details, but detecting pump-and-dump scams seems like a really good use of data mining.
http://news.bbc.co.uk/1/hi/technology/7552009.stm
http://news.yahoo.com/s/zd/20080811/tc_zd/230711
Data mining works best when there’s a well-defined profile you’re searching for, a reasonable number of attacks per year, and a low cost of false alarms.
https://www.schneier.com/blog/archives/2006/03/…
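
The arithmetic behind that last sentence is worth seeing. Here is a toy base-rate calculation in Python (the detector quality and prevalence numbers are invented for illustration, not taken from the articles above):

    # Of the alarms a detector raises, what fraction are real hits?
    def ppv(prevalence, hit_rate, false_alarm_rate):
        true_pos = prevalence * hit_rate
        false_pos = (1 - prevalence) * false_alarm_rate
        return true_pos / (true_pos + false_pos)

    # Pump-and-dump: a relatively common, well-profiled crime.
    print(f"fraud alarms that are real:  {ppv(1e-3, 0.9, 1e-3):.1%}")
    # Terrorist plots: vanishingly rare, same detector quality.
    print(f"terror alarms that are real: {ppv(1e-7, 0.9, 1e-3):.4%}")

With an identical detector, about half the fraud alarms are real, while virtually every terrorism alarm is a false one. That difference is entirely the base rate.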

Over-hyping risks to children, and the effectiveness of giving them cell phones:
http://www.cnn.com/2008/TECH/ptech/08/11/…

The UK police seized a copy of the War on Terror board game because—and it’s almost too stupid to believe—the balaclava “could be used to conceal someone’s identity or could be used in the course of a criminal act.” Don’t they realize that balaclavas are for sale everywhere in the UK? Or that scarves, hoods, handkerchiefs, and dark glasses could also be used to conceal someone’s identity?
http://www.cambridge-news.co.uk/cn%5Fnews%5Fhome/…
Sounds like a fun game, though:
http://www.waronterrortheboardgame.com/


Hacking Mifare Transport Cards

London’s Oyster card has been cracked, and the final details will become public in October. NXP Semiconductors, the Philips spin-off that makes the system, lost a court battle to prevent the researchers from publishing. People might be able to use this information to ride for free, but the sky won’t be falling. And the publication of this serious vulnerability actually makes us all safer in the long run.

Here’s the story. Every Oyster card has a radio-frequency identification chip that communicates with readers mounted on the ticket barrier. That chip, the “Mifare Classic” chip, is used in hundreds of other transport systems as well—Boston, Los Angeles, Brisbane, Amsterdam, Taipei, Shanghai, Rio de Janeiro—and as an access pass in thousands of companies, schools, hospitals, and government buildings around Britain and the rest of the world.

The security of Mifare Classic is terrible. This is not an exaggeration; it’s kindergarten cryptography. Anyone with any security experience would be embarrassed to put his name to the design. NXP attempted to deal with this embarrassment by keeping the design secret.
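
For a sense of scale: the cipher in question, NXP’s proprietary Crypto-1, uses a 48-bit key. A back-of-the-envelope sketch in Python (the search rate is an assumption, and the published attacks are far faster than brute force anyway):

    # Exhausting a 48-bit keyspace at an assumed billion keys per second.
    keyspace = 2 ** 48
    rate = 1e9  # keys per second; assumed, roughly modest FPGA hardware
    print(f"2^48 keys:  {keyspace / rate / 86400:.1f} days to brute-force")
    print(f"2^128 keys: {2 ** 128 / rate / 86400 / 365.25:.1e} years")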

The group that broke Mifare Classic is from Radboud University Nijmegen in the Netherlands. They demonstrated the attack by riding the Underground for free, and by breaking into a building. Their two papers (one is already online) will be published at two conferences this autumn.

The second paper is the one that NXP sued over. They called disclosure of the attack “irresponsible,” warned that it will cause “immense damages,” and claimed that it “will jeopardize the security of assets protected with systems incorporating the Mifare IC.” The Dutch court would have none of it: “Damage to NXP is not the result of the publication of the article but of the production and sale of a chip that appears to have shortcomings.”

Exactly right. More generally, the notion that secrecy supports security is inherently flawed. Whenever you see an organization claiming that design secrecy is necessary for security—in ID cards, in voting machines, in airport security—it invariably means that its security is lousy and it has no choice but to hide it. Any competent cryptographer would have designed Mifare’s security with an open and public design.

Secrecy is fragile. Mifare’s security was based on the belief that no one would discover how it worked; that’s why NXP had to muzzle the Dutch researchers. But that’s just wrong. Reverse-engineering isn’t hard. Other researchers had already exposed Mifare’s lousy security. A Chinese company even sells a compatible chip. Is there any doubt that the bad guys already know about this, or will soon enough?

Publication of this attack might be expensive for NXP and its customers, but it’s good for security overall. Companies will only design security as good as their customers know to ask for. NXP’s security was so bad because customers didn’t know how to evaluate security: either they didn’t know what questions to ask, or they didn’t know enough to distrust the marketing answers they were given. This court ruling encourages companies to build security properly rather than relying on shoddy design and secrecy, and discourages them from promising security based on their ability to threaten researchers.

It’s unclear how this break will affect Transport for London. Cloning takes only a few seconds, and the thief only has to brush up against someone carrying a legitimate Oyster card. But it requires an RFID reader and a small piece of software which, while feasible for a techie, are too complicated for the average fare dodger. The police are likely to quickly arrest anyone who tries to sell cloned cards on any scale. TfL promises to turn off any cloned cards within 24 hours, but that will hurt the innocent victim who had his card cloned more than the thief.
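
How might TfL spot clones that quickly? One plausible back-end check (my speculation; TfL hasn’t said how it detects clones) is to flag any card ID that appears in two places it couldn’t have travelled between. A minimal sketch, with hypothetical tap records:

    from datetime import datetime, timedelta

    # Hypothetical tap records: (card_id, station, time of tap).
    taps = [
        ("card-1234", "Brixton",     datetime(2008, 8, 1, 9, 0)),
        ("card-1234", "Heathrow T4", datetime(2008, 8, 1, 9, 5)),
        ("card-5678", "Brixton",     datetime(2008, 8, 1, 9, 0)),
        ("card-5678", "Victoria",    datetime(2008, 8, 1, 9, 40)),
    ]

    # A real system would use per-station-pair minimum journey times;
    # a single threshold keeps the sketch short.
    MIN_JOURNEY = timedelta(minutes=30)

    events = {}
    for card, station, when in taps:
        events.setdefault(card, []).append((when, station))
    for card, card_taps in events.items():
        card_taps.sort()
        for (t1, s1), (t2, s2) in zip(card_taps, card_taps[1:]):
            if s1 != s2 and t2 - t1 < MIN_JOURNEY:
                print(f"possible clone: {card} at {s1} then {s2}, "
                      f"{(t2 - t1).seconds // 60} minutes apart")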

The vulnerability is far more serious to the companies that use Mifare Classic as an access pass. It would be very interesting to know how NXP presented the system’s security to them.

And while these attacks pertain only to the Mifare Classic chip, they make me suspicious of the entire product line. NXP sells a more secure chip and has another on the way, but given the number of basic cryptography mistakes NXP made with Mifare Classic, one has to wonder whether the “more secure” versions will be sufficiently so.

News:
http://www.guardian.co.uk/technology/2008/jun/26/…
http://www.ru.nl/ds/research/rfid/
http://technology.timesonline.co.uk/tol/news/…
http://www.youtube.com/watch?v=NW3RGbQTLhE
http://news.cnet.com/8301-10784_3-9985886-7.html?…
http://www.secureidnews.com/news/2008/07/10/…
http://news.cnet.co.uk/software/…
http://www.techradar.com/news/world-of-tech/…
One of the papers:
http://www.cs.ru.nl/~flaviog/publications/…

Dutch court ruling:
http://zoeken.rechtspraak.nl/resultpage.aspx?…
Secrecy and security:
http://www.schneier.com/crypto-gram-0205.html#1

Other research on Mifare:
http://www.computerworld.com/action/article.do?…
http://www.cs.virginia.edu/~evans/pubs/usenix08/
http://eprint.iacr.org/2008/166
http://staff.science.uva.nl/~delaat/sne-2006-2007/…
http://www.translink.nl/media/bijlagen/nieuws/…
Chinese compatible chip:
http://www.fmsh.com/english/product_chipcard.php?…
http://www.fmsh.com/english/products/…

This essay originally appeared in the Guardian.
http://www.guardian.co.uk/technology/2008/aug/07/…


Information Security and Liabilities

A recent study of Internet browsers worldwide discovered that over half—52%—of Internet Explorer users weren’t using the current version of the software. For other browsers the numbers were better, but not much: 17% of Firefox users, 35% of Safari users, and 44% of Opera users were using an old version.

This is particularly important because browsers are an increasingly common vector for internet attacks, and old versions of browsers don’t have all their security patches up to date. They’re open to attack through vulnerabilities the vendors have already fixed.

Security professionals are quick to blame users who don’t use the latest update and install every patch. “Keeping up is critical for security,” they say, and “if someone doesn’t update their system, it’s their own fault that they get hacked.” This sounds a lot like blaming the victim: “He should have known not to walk down that deserted street; it’s his own fault he was mugged.” Of course the victim could have—and quite possibly should have—taken further precautions, but the real blame lies elsewhere.

It’s not as if patching is easy. Even in a corporate setting, systems administrators have trouble keeping up with the never-ending flow of software patches. There could easily be dozens per week across all operating systems and applications, and far too often they break things. Microsoft’s Automatic Update feature has automated the process, but that’s the exception. Patching is triage, and administrators are constantly prioritizing it along with everything else they’re doing.

It’s the system that’s broken. There’s no other industry where shoddy products are sold to a public that expects regular problems, and where consumers are the ones who have to learn how to fix them. If an automobile manufacturer has a problem with a car and issues a recall notice, it’s a rare occurrence and a big deal—and you can take your car in and get it fixed for free. Computers are the only mass-market consumer item that pushes this burden onto the consumer, requiring him to have a high level of technical sophistication just to survive.

It doesn’t have to be this way. It is possible to write quality software. It is possible to sell software products that work properly, and don’t need to be constantly patched. The problem is that it’s expensive and time-consuming. Software vendors won’t do it, of course, because the marketplace won’t reward it.

The key to fixing this is software liabilities. Computers are also the only mass-market consumer item where the vendors accept no liability for faults. The reason automobiles are so well designed is that manufacturers face liabilities if they screw up. A lack of software liability is effectively a vast government subsidy of the computer industry. It allows them to produce more products faster, with less concern about safety, security, and quality.

Last summer, the House of Lords Science and Technology Committee issued a report on “Personal Internet Security.” I was invited to give testimony for that report, and one of my recommendations was that software vendors be held liable when they are at fault. Their final report included that recommendation. The government rejected the recommendations in that report last autumn, and last week the committee issued a report on their follow-up inquiry, which still recommends software liabilities.

Good for them.

I’m not implying that liabilities are easy, or that all the liability for security vulnerabilities should fall on the vendor. But the courts are good at apportioning partial liability. Any automobile liability suit has many potential responsible parties: the car, the driver, the road, the weather, possibly another driver and another car, and so on. Similarly, a computer failure has several parties who may be partially responsible: the software vendor, the computer vendor, the network vendor, the user, possibly the hacker, and so on. But we’re never going to get there until we start. Software liability is the market force that will incentivise companies to improve their software quality—and everyone’s security.

This essay was previously published in the Guardian:
http://www.guardian.co.uk/technology/2008/jul/17/…

House of Lords documents:
http://www.publications.parliament.uk/pa/ld200607/…
http://www.official-documents.gov.uk/document/cm72/…
http://www.publications.parliament.uk/pa/ld200708/…
Liability as a way to fix externalities:
https://www.schneier.com/blog/archives/2007/01/…


Software Liabilities and Free Software

Whenever I write about software liabilities, many people ask about free and open source software. If people who write free software, like Password Safe, were forced to assume liabilities, they simply wouldn’t be able to, and free software would disappear.

Don’t worry, they won’t be.

The key to understanding this is that liability of this sort arises from a contract, and with free software—or free anything—there’s no contract. Free software wouldn’t fall under a liability regime because the writer and the user have no business relationship; they are not seller and buyer. I would hope the courts would realize this without any prompting, but we could always pass a Good Samaritan-like law that would protect people who distribute free software. (The opposite would be an Attractive Nuisance-like law—that would be bad.)

There would be an industry of companies that provide liability coverage for free software. Red Hat, for example, sells Linux; under such a regime it would have to provide some liability protection. Yes, this would mean charging more for Linux; the extra would go toward insurance premiums. The same sort of insurance protection would be available to companies that use other free software packages.

The insurance industry is key to making this work. Luckily, insurers are good at protecting people against liability. There’s no reason to think they won’t be able to do it here.


Schneier/BT News

Schneier interviewed by RU Sirius, in April 2007:
http://www.rusiriusradio.com/2007/04/02/…
http://www.10zenmonkeys.com/2007/04/10/…


Congratulations to Our Millionth Terrorist!

The U.S. terrorist watch list has hit one million names. I sure hope we’re giving our millionth terrorist a prize of some sort.

Who knew that a million people are terrorists? Why, there are only twice as many burglars in the U.S. And fifteen times more terrorists than arsonists.

Is this idiotic, or what?

Some people are saying fix it, but there seems to be no motivation to do so. I’m sure the career incentives aren’t aligned that way. You probably get promoted by putting people on the list. But taking someone off the list…if you’re wrong, no matter how remote that possibility is, you can probably lose your career. This is why in civilized societies we have a judicial system, to be an impartial arbiter between law enforcement and the accused. But that system doesn’t apply here.

Kafka would be proud.

Okay, so it’s not a million people. Seems to be about 400,000 people, only 5% of whom are Americans. Not that 400,000 terrorists is any less absurd.

“Screening and law enforcement agencies encountered the actual people on the watch list (not false matches) more than 53,000 times from December 2003 to May 2007, according to a Government Accountability Office report last fall.”

Okay, so I have a question. How many of those 53,000 were arrested? Of those who were not, why not? How many have we taken off the list after we’ve investigated them?
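
Just for scale, here is my arithmetic on the GAO figures in that quote:

    from datetime import date

    encounters = 53_000
    span = (date(2007, 5, 31) - date(2003, 12, 1)).days
    print(f"{encounters / span:.0f} watch-list encounters per day")  # ~42

That is roughly forty encounters a day, every day, which makes those questions all the more pressing.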

http://www.aclu.org/privacy/35968prs20080714.html
http://www.fbi.gov/ucr/cius_04/offenses_reported/…
http://www.fbi.gov/ucr/cius_04/offenses_reported/…
http://www.cnn.com/2008/US/07/16/watch.list/index.html
http://www.propublica.org/article/…
Bob Blakely runs the numbers.
http://notabob.blogspot.com/2008/07/…

Jon Stewart makes fun of the list, too:
http://www.thedailyshow.com/video/index.jhtml?…


TrueCrypt’s Deniable File System

Together with Tadayoshi Kohno, Steve Gribble, and three of their students at the University of Washington, I have a new paper that breaks the deniable encryption feature of TrueCrypt version 5.1a. Basically, modern operating systems leak information like mad, making deniability a very difficult requirement to satisfy.

The students did most of the actual work. I helped with the basic ideas, and contributed the threat model. Deniability is a very hard feature to achieve.

“There are several threat models against which a DFS could potentially be secure:

“* One-Time Access. The attacker has a single snapshot of the disk image. An example would be when the secret police seize Alice’s computer.

“* Intermittent Access. The attacker has several snapshots of the disk image, taken at different times. An example would be border guards who make a copy of Alice’s hard drive every time she enters or leaves the country.

“* Regular Access. The attacker has many snapshots of the disk image, taken in short intervals. An example would be if the secret police break into Alice’s apartment every day when she is away, and make a copy of the disk each time.”
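
To make the intermittent-access model concrete, here is a minimal sketch (my illustration, not code from our paper; the image filenames are hypothetical) of what those border guards could do with two snapshots: hash fixed-size blocks of each raw image and see what changed.

    import hashlib

    BLOCK = 4096  # compare raw images in 4 KB blocks

    def block_hashes(path):
        """SHA-256 of each fixed-size block of a raw disk image."""
        hashes = []
        with open(path, "rb") as f:
            while chunk := f.read(BLOCK):
                hashes.append(hashlib.sha256(chunk).digest())
        return hashes

    # Hypothetical snapshots from two border crossings.
    before = block_hashes("alice-monday.img")
    after = block_hashes("alice-friday.img")
    changed = [i for i, pair in enumerate(zip(before, after))
               if pair[0] != pair[1]]
    print(f"{len(changed)} blocks changed")
    # Changes clustered in a region Alice claims is unused free space
    # are exactly the kind of leak that betrays a hidden volume.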

Since we wrote our paper, TrueCrypt released version 6.0 of its software, which claims to have addressed many of the issues we’ve uncovered. We did not have time to analyze version 6.0. But, honestly, I wouldn’t trust it.

http://www.schneier.com/paper-truecrypt-dfs.html
http://www.truecrypt.org/docs/?…
http://www.truecrypt.org/docs/?…

Articles:
http://www.darkreading.com/document.asp?…
http://www.pcworld.com/businesscenter/article/…
http://yro.slashdot.org/article.pl?sid=08/07/17/2043248


The DNS Vulnerability

Despite the best efforts of the security community, the details of a critical Internet vulnerability discovered by Dan Kaminsky about six months ago have leaked. Hackers are racing to produce exploit code, and network operators who haven’t already patched the hole are scrambling to catch up. The whole mess is a good illustration of the problems with researching and disclosing flaws like this.

The details of the vulnerability aren’t important, but basically it’s a form of DNS cache poisoning. The DNS system is what translates domain names people understand, like www.schneier.com, to IP addresses computers understand: 204.11.246.1. There is a whole family of vulnerabilities where the DNS system on your computer is fooled into thinking that the IP address for www.badsite.com is really the IP address for www.goodsite.com—there’s no way for you to tell the difference—and that allows the criminals at www.badsite.com to trick you into doing all sorts of things, like giving up your bank account details. Kaminsky discovered a particularly nasty variant of this cache-poisoning attack.
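
To see what an off-path attacker is up against, remember that a pre-randomization resolver matches replies to queries mainly by a 16-bit transaction ID. Here is a small simulation (a simplification of the general cache-poisoning class, not Kaminsky’s specific variant):

    import random

    def poisoned(burst, id_space=2 ** 16):
        """One attempt: the resolver picks a secret transaction ID; the
        attacker races the real reply with `burst` forged replies, each
        carrying a distinct guessed ID."""
        txid = random.randrange(id_space)
        return txid in random.sample(range(id_space), burst)

    trials = 2000
    for burst in (64, 512, 4096):
        rate = sum(poisoned(burst) for _ in range(trials)) / trials
        print(f"{burst:4d} forged replies: {rate:.2%} of queries poisoned")

The per-query odds look small, but the variant Kaminsky found lets the attacker trigger as many fresh queries as he likes, so he simply retries until he wins.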

Here’s the way the timeline was supposed to work: Kaminsky discovered the vulnerability about six months ago, and quietly worked with vendors to patch it. (There’s a fairly straightforward fix, although the implementation nuances are complicated.) Of course, this meant describing the vulnerability to them; why would companies like Microsoft and Cisco believe him otherwise? On July 8, he held a press conference to announce the vulnerability—but not the details—and reveal that a patch was available from a long list of vendors. We would all have a month to patch, and Kaminsky would release details of the vulnerability at the Black Hat conference early next month.

Of course, the details leaked. How isn’t important; it could have leaked a zillion different ways. Too many people knew about it for it to remain secret. Others who knew the general idea were too smart not to speculate on the details. I’m kind of amazed the details remained secret for this long; undoubtedly it had leaked into the underground community before the public leak two days ago. So now everyone who back-burnered the problem is rushing to patch, while the hacker community is racing to produce working exploits.

What’s the moral here? It’s easy to condemn Kaminsky: If he had shut up about the problem, we wouldn’t be in this mess. But that’s just wrong. Kaminsky found the vulnerability by accident. There’s no reason to believe he was the first one to find it, and it’s ridiculous to believe he would be the last. Don’t shoot the messenger. The problem is with the DNS protocol; it’s insecure.

The real lesson is that the patch treadmill doesn’t work, and it hasn’t for years. This cycle of finding security holes and rushing to patch them before the bad guys exploit those vulnerabilities is expensive, inefficient and incomplete. We need to design security into our systems right from the beginning. We need assurance. We need security engineers involved in system design. This process won’t prevent every vulnerability, but it’s much more secure—and cheaper—than the patch treadmill we’re all on now.

What a security engineer brings to the problem is a particular mindset. He thinks about systems from a security perspective. It’s not that he discovers all possible attacks before the bad guys do; it’s more that he anticipates potential types of attacks, and defends against them even if he doesn’t know their details. I see this all the time in good cryptographic designs. It’s over-engineering based on intuition, but if the security engineer has good intuition, it generally works.

Kaminsky’s vulnerability is a perfect example of this. Years ago, cryptographer Daniel J. Bernstein looked at DNS security and decided that Source Port Randomization was a smart design choice. That’s exactly the work-around being rolled out now following Kaminsky’s discovery. Bernstein didn’t discover Kaminsky’s attack; instead, he saw a general class of attacks and realized that this enhancement could protect against them. Consequently, the DNS program he wrote in 2000, djbdns, doesn’t need to be patched; it’s already immune to Kaminsky’s attack.
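
The arithmetic behind that work-around is simple: randomizing the source port adds roughly another 16 bits the attacker has to guess. A sketch with assumed numbers:

    ID_BITS = 16      # DNS transaction ID
    PORT_BITS = 16    # assumed: ~64K usable random source ports
    burst = 100       # forged replies racing each query; assumed

    for label, bits in [("ID only", ID_BITS),
                        ("ID + random port", ID_BITS + PORT_BITS)]:
        tries = 2 ** bits / burst  # expected queries until one is poisoned
        print(f"{label}: ~{tries:,.0f} expected attempts")

Going from hundreds of expected attempts to tens of millions is why the fix buys time, even though the protocol remains fundamentally insecure.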

That’s what a good design looks like. It’s not just secure against known attacks; it’s also secure against unknown attacks. We need more of this, not just on the internet but in voting machines, ID cards, transportation payment cards … everywhere. Stop assuming that systems are secure unless demonstrated insecure; start assuming that systems are insecure unless designed securely.

Details of the attack:
http://darkoz.com/?p=15
http://www.invisibledenizen.org/2008/07/…
News articles:
http://news.bbc.co.uk/2/hi/technology/7496735.stm
http://www.doxpara.com/?p=1162
http://www.kb.cert.org/vuls/id/800113
http://www.blackhat.com/html/bh-usa-08/…
http://it.slashdot.org/it/08/07/21/2212227.shtml
http://blog.wired.com/27bstroke6/2008/07/…
http://addxorrol.blogspot.com/2008/07/…
http://blog.wired.com/27bstroke6/2008/08/…

Patch treadmill:
http://www.schneier.com/crypto-gram-0103.html#1

Assurance:
https://www.schneier.com/blog/archives/2007/08/…

The security mindset:
https://www.schneier.com/blog/archives/2008/03/…

Dan Bernstein’s work:
http://cr.yp.to/djbdns/forgery.html
http://cr.yp.to/djbdns/dnscache.html

This essay previously appeared on Wired.com:
http://www.wired.com/politics/security/commentary/…


Comments from Readers

There are hundreds of comments—many of them interesting—on these topics on my blog. Search for the story you want to comment on, and join in.

http://www.schneier.com/


CRYPTO-GRAM is a free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise. You can subscribe, unsubscribe, or change your address on the Web at <http://www.schneier.com/crypto-gram.html>. Back issues are also available at that URL.

Please feel free to forward CRYPTO-GRAM, in whole or in part, to colleagues and friends who will find it valuable. Permission is also granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.

CRYPTO-GRAM is written by Bruce Schneier. Schneier is the author of the best sellers “Beyond Fear,” “Secrets and Lies,” and “Applied Cryptography,” and an inventor of the Blowfish and Twofish algorithms. He is the Chief Security Technology Officer of BT (BT acquired Counterpane in 2006), and is on the Board of Directors of the Electronic Privacy Information Center (EPIC). He is a frequent writer and lecturer on security topics. See <http://www.schneier.com>.

Crypto-Gram is a personal newsletter. Opinions expressed are not necessarily those of BT.

Copyright (c) 2008 by Bruce Schneier.

Sidebar photo of Bruce Schneier by Joe MacInnis.