Crypto-Gram

November 15, 2007

by Bruce Schneier
Founder and CTO
BT Counterpane
schneier@schneier.com
http://www.schneier.com
http://www.counterpane.com

A free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise.

For back issues, or to subscribe, visit <http://www.schneier.com/crypto-gram.html>.

You can read this issue on the web at <http://www.schneier.com/crypto-gram-0711.html>. These same essays appear in the “Schneier on Security” blog: <http://www.schneier.com/>. An RSS feed is available.


In this issue:
      The War on the Unexpected
      Security Risks of Online Political Contributing
      Chemical Plant Security and Externalities
      News
      Switzerland Protects Its Vote with Quantum Cryptography
      Security by Letterhead
      Schneier/BT Counterpane News
      Cyberwar: Myth or Reality?
      Understanding the Black Market in Internet Crime
      The Strange Story of Dual_EC_DRBG
      Comments from Readers

The War on the Unexpected

We’ve opened up a new front in the war on terror. It’s an attack on the unique, the unorthodox, the unexpected; it’s a war on different. If you act different, you might find yourself investigated, questioned, and even arrested—even if you did nothing wrong, and had no intention of doing anything wrong. The problem is a combination of citizen informants and a CYA attitude among police that results in a knee-jerk escalation of reported threats.

This isn’t the way counterterrorism is supposed to work, but it’s happening everywhere. It’s a result of our relentless campaign to convince ordinary citizens that they’re the front line of terrorism defense. “If you see something, say something” is how the ads read in the New York City subways. “If you suspect something, report it,” urges another ad campaign in Manchester, UK. The Michigan State Police have a seven-minute video. Administration officials from then-Attorney General John Ashcroft to DHS Secretary Michael Chertoff to President Bush have asked us all to report any suspicious activity.

The problem is that ordinary citizens don’t know what a real terrorist threat looks like. They can’t tell the difference between a bomb and a tape dispenser, electronic name badge, CD player, bat detector, or trash sculpture; or the difference between terrorist plotters and imams, musicians, or architects. All they know is that something makes them uneasy, usually based on fear, media hype, or just something being different.

Even worse: after someone reports a “terrorist threat,” the whole system is biased towards escalation and CYA instead of a more realistic threat assessment.

Watch how it happens. Someone sees something, so he says something. The person he says it to—a policeman, a security guard, a flight attendant—now faces a choice: ignore or escalate. Even though he may believe that it’s a false alarm, it’s not in his best interests to dismiss the threat. If he’s wrong, it’ll cost him his career. But if he escalates, he’ll be praised for “doing his job” and the cost will be borne by others. So he escalates. And the person he escalates to also escalates, in a series of CYA decisions. And before we’re done, innocent people have been arrested, airports have been evacuated, and hundreds of police hours have been wasted.

This story has been repeated endlessly, both in the U.S. and in other countries. Someone—these are all real—notices a funny smell, or some white powder, or two people passing an envelope, or a dark-skinned man leaving boxes at the curb, or a cell phone in an airplane seat; the police cordon off the area, make arrests, and/or evacuate airplanes; and in the end the cause of the alarm is revealed as a pot of Thai chili sauce, or flour, or a utility bill, or an English professor recycling, or a cell phone in an airplane seat.

Of course, by then it’s too late for the authorities to admit that they made a mistake and overreacted, that a sane voice of reason at some level should have prevailed. What follows is the parade of police and elected officials praising each other for doing a great job, and prosecuting the poor victim—the person who was different in the first place—for having the temerity to try to trick them.

For some reason, governments are encouraging this kind of behavior. It’s not just the publicity campaigns asking people to come forward and snitch on their neighbors; they’re asking certain professions to pay particular attention: truckers to watch the highways, students to watch campuses, and scuba instructors to watch their students. The U.S. wanted meter readers and telephone repairmen to snoop around houses. There’s even a new law protecting people who turn in their travel mates based on some undefined “objectively reasonable suspicion,” whatever that is.

If you ask amateurs to act as front-line security personnel, you shouldn’t be surprised when you get amateur security.

We need to do two things. The first is to stop urging people to report their fears. People have always come forward to tell the police when they see something genuinely suspicious, and should continue to do so. But encouraging people to raise an alarm every time they’re spooked only squanders our security resources and makes no one safer.

We don’t want people to never report anything. A store clerk’s tip led to the unraveling of a plot to attack Fort Dix last May, and in March an alert Southern California woman foiled a kidnapping by calling the police about a suspicious man carting around a person-sized crate. But these incidents only reinforce the need to realistically assess, not automatically escalate, citizen tips. In criminal matters, law enforcement is experienced in separating legitimate tips from unsubstantiated fears, and allocating resources accordingly; we should expect no less from them when it comes to terrorism.

Equally important, politicians need to stop praising and promoting the officers who get it wrong. And everyone needs to stop castigating, and prosecuting, the victims just because they embarrassed the police by their innocence.

Causing a city-wide panic over blinking signs, a guy with a pellet gun, or stray backpacks, is not evidence of doing a good job: it’s evidence of squandering police resources. Even worse, it causes its own form of terror, and encourages people to be even more alarmist in the future. We need to spend our resources on things that actually make us safer, not on chasing down and trumpeting every paranoid threat anyone can come up with.

Ad campaigns:
http://www.mta.info/mta/security/index.html
http://www.manchestereveningnews.co.uk/news/s/1000/…
https://www.schneier.com/blog/archives/2007/04/…

Administration comments:
http://www.washingtonpost.com/wp-srv/nation/…
http://www.usatoday.com/news/washington/…
http://query.nytimes.com/gst/fullpage.html?…

Incidents:
http://news.bbc.co.uk/1/hi/northern_ireland/6387857.stm
https://www.schneier.com/blog/archives/2007/09/…
http://www.lineofduty.com/content/view/84004/128/
https://www.schneier.com/blog/archives/2007/05/…
http://www.startribune.com/462/story/826056.html
http://dir.salon.com/story/tech/col/smith/2004/07/…
https://www.schneier.com/blog/archives/2006/10/…
https://www.schneier.com/blog/archives/2007/10/…
http://www.msnbc.msn.com/id/20441775/
http://www.thisisbournemouth.co.uk/…
http://alternet.org/rights/50939/
https://www.schneier.com/blog/archives/2007/04/…
http://www.mercurynews.com/breakingnews/ci_7084101?…
http://www.boston.com/news/globe/city_region/…
http://www.postgazette.com/pg/06081/674773.stm
https://www.schneier.com/blog/archives/2007/04/…

CYA:
https://www.schneier.com/blog/archives/2007/02/…

Public campaigns:
https://www.schneier.com/blog/archives/2005/12/…
http://www.winnipegfirst.ca/article/2007/09/24/…
http://www.underwatertimes.com/print.php?…
http://en.wikipedia.org/wiki/Operation_TIPS

Law protecting tipsters:
http://www.post-gazette.com/pg/07245/813550-37.stm

Successful tips:
http://www.washingtonpost.com/wp-dyn/content/…
http://www.pe.com/localnews/publicsafety/stories/…

This essay originally appeared in Wired.com:
http://www.wired.com/politics/security/commentary/…

Some links didn’t make it into the original article. There’s this creepy “if you see a father holding his child’s hands, call the cops” campaign:
http://www.bloggernews.net/18108
There’s this story of an iPod found on an airplane:
http://forums.worldofwarcraft.com/thread.html?…
There’s this story of an “improvised electronics device” trying to get through airport security:
http://www.makezine.com/blog/archive/2007/09/…
This is a good essay on the “war on electronics.”
http://www.cnet.com/surveillance-state/…


Security Risks of Online Political Contributing

Security researcher Christopher Soghoian gave a presentation last month warning of the potential phishing risk caused by online political donation sites. The Threat Level blog reported:

“The presidential campaigns’ tactic of relying on impulsive giving spurred by controversial news events and hyped-up deadlines, combined with a number of other factors such as inconsistent Web addresses and a muddle of payment mechanisms creates a conducive environment for fraud, says Soghoian.”

And:

“Fraudsters could easily send out e-mails and establish Web sites that mimic the official campaigns’ sites and similarly send out such e-mails that would encourage people to ‘donate’ money without checking for the authenticity of the site.”

He has a point, but it’s not new to online contributions. Fake charities and political organizations have long been problems. When you get a solicitation in the mail for “Concerned Citizens for a More Perfect Country”—insert whatever personal definition you have for “more perfect” and “country”—you don’t know if the money is going to your cause or into someone’s pocket. When you give money on the street to someone soliciting contributions for this cause or that one, you have no idea what will happen to the money at the end of the day.

In the end, contributing money requires trust. While the Internet certainly makes frauds like this easier—anyone can set up a webpage that accepts PayPal and send out a zillion e-mails—it’s nothing new.

http://blog.wired.com/27bstroke6/2007/10/…
http://www.politicalphishing.com/…


Chemical Plant Security and Externalities

It’s not true that no one worries about terrorists attacking chemical plants; it’s just that our politics seem to leave us unable to deal with the threat.

Hazardous chemicals such as ammonia, chlorine, propane, and flammable mixtures are constantly being produced or stored in the United States as a result of legitimate industrial processes. Chlorine gas is particularly toxic; in addition to bombing a plant, someone could hijack a chlorine truck or blow up a railcar. Phosgene is even more dangerous. According to the Environmental Protection Agency, there are 7,728 chemical plants in the United States where an act of sabotage—or an accident—could threaten more than 1,000 people. Of those, 106 facilities could threaten more than a million people.

The problem of securing chemical plants against terrorism—or even accidents—is actually simple once you understand the underlying economics. Normally, we leave the security of something up to its owner. The basic idea is that the owner of each chemical plant 1) best understands the risks, and 2) is the one who loses out if security fails. Any outsider—a regulatory agency, for example—is just going to get it wrong. It’s the basic free-market argument, and in most instances it makes a lot of sense.

And chemical plants do have security. They have fences and guards (which might or might not be effective). They have fail-safe mechanisms built into their operations. For example, many large chemical companies use hazardous substances like phosgene, methyl isocyanate and ethylene oxide in their plants, but don’t ship them between locations. They minimize the amounts that are stored as process intermediates. In rare cases of extremely hazardous materials, no significant amounts are stored; instead they are only present in pipes connecting the reactors that make them with the reactors that consume them.

This is all good and right, and what free-market capitalism dictates. The problem is, that isn’t enough.

Any rational chemical plant owner will only secure the plant up to its value to him. That is, if the plant is worth $100 million, then it makes no sense to spend $200 million on securing it. If the odds of it being attacked are less than 1 percent, it doesn’t even make sense to spend $1 million on securing it. The math is more complicated than this, because you have to factor in such things as the reputational cost of having your name splashed all over the media after an incident, but that’s the basic idea.
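
To make the argument concrete, here is the owner’s arithmetic as a few lines of Python, using the essay’s illustrative numbers (the figures are examples, not real estimates):

    # The owner's rational ceiling on security spending is his own
    # expected loss, not society's. Numbers are the essay's examples.
    plant_value = 100_000_000         # what the owner stands to lose
    attack_probability = 0.01         # assumed odds of an attack

    owner_expected_loss = plant_value * attack_probability   # $1 million
    # Spending more than this on security is irrational for the owner.

    # Society's expected loss is far larger, but none of it appears in
    # the owner's ledger -- that gap is the externality discussed below.
    societal_damage = 10_000_000_000  # hypothetical: deaths plus indirect harm
    externality = (societal_damage - plant_value) * attack_probability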

But to society, the cost of an actual attack can be much, much greater. If a terrorist blows up a particularly toxic plant in the middle of a densely populated area, deaths could be in the tens of thousands and damage could be in the hundreds of millions. Indirect economic damage could be in the billions. The owner of the chlorine plant would pay none of these potential costs.

Sure, the owner could be sued. But he’s not at risk for more than the value of his company, and—in any case—he’d probably be smarter to take the chance. Expensive lawyers can work wonders, courts can be fickle, and the government could step in and bail him out (as it did with airlines after Sept. 11). And a smart company can often protect itself by spinning off the risky asset in a subsidiary company, or selling it off completely. The overall result is that our nation’s chemical plants are secured to a much smaller degree than the risk warrants.

In economics, this is called an *externality*: an effect of a decision not borne by the decision maker. The decision maker in this case, the chemical plant owner, makes a rational economic decision based on the risks and costs *to him*.

If we—whether we’re the community living near the chemical plant or the nation as a whole—expect the owner of that plant to spend money for increased security to account for those externalities, we’re going to have to pay for it. And we have three basic ways of doing that. One, we can do it ourselves, stationing government police or military or contractors around the chemical plants. Two, we can pay the owners to do it, subsidizing some sort of security standard.

Or three, we could regulate security and force the companies to pay for it themselves. There’s no free lunch, of course. “We,” as in society, still pay for it in increased prices for whatever the chemical plants are producing, but the cost is paid for by the product’s consumers rather than by taxpayers in general.

Personally, I don’t care very much which method is chosen: that’s politics, not security. But I do know we’ll have to pick one, or some combination of the three. Asking nicely just isn’t going to work. It can’t; not in a free-market economy.

We taxpayers pay for airport security, and not the airlines, because the overall effects of a terrorist attack against an airline are far greater than its effects on the particular airline targeted. We pay for port security because the effects of bringing a large weapon into the country are far greater than the concerns of the port’s owners. And we should pay for chemical plant, train, and truck security for exactly the same reasons.

Thankfully, after years of hoping the chemical industry would do it on its own, this April the Department of Homeland Security started regulating chemical plant security. Some complain that the regulations don’t go far enough, but at least it’s a start.

Risks:
http://www.usatoday.com/news/washington/…
http://www.chemsafety.gov/index.cfm?…
http://www.bt.cdc.gov/agent/phosgene/basics/facts.asp
http://www.opencrs.com/document/M20050627/…
http://digital.library.unt.edu/govdocs/crs/…
http://www.washingtonmonthly.com/features/2007/…

Regulations:
http://www.boston.com/news/nation/washington/…
http://www.usatoday.com/printedition/news/20070427/…

This essay previously appeared on Wired.com:
http://www.wired.com/politics/security/commentary/…


News

A handful of prominent security researchers have published a report on the security risks of the large-scale eavesdropping made temporarily legal by the “Protect America Act,” passed in the U.S. in August, which may soon be made permanent. “Risking Communications Security: Potential Hazards of the ‘Protect America Act’”—dated October 1, 2007, and marked “draft”—is well worth reading:
http://www.crypto.com/papers/paa-comsec-draft.pdf
https://www.schneier.com/blog/archives/2007/10/…

Hacker extensions for the Firefox web browser.
http://www.darkreading.com/document.asp?doc_id=136029

An excellent three-part series on trends in criminal malware, mostly about Gozi. Malware as a service.
http://www.cio.com/article/135500/
http://www.cio.com/article/135550/
http://www.cio.com/article/135551/

Macintosh security:
http://www.macworld.com/2007/10/features/…

Hacking a 911 emergency phone system. There are no details of what the “hacking” was, or whether it was anything more than spoofing the caller ID.
http://seattletimes.nwsource.com/html/localnews/…
http://cwflyris.computerworld.com/t/2216044/…
http://www.msnbc.msn.com/id/21336319/
http://www.ocregister.com/news/…

Fascinating story of insider cheating in online poker:
http://freakonomics.blogs.nytimes.com/2007/10/17/…
http://forumserver.twoplustwo.com/showthreaded.php?…
http://forumserver.twoplustwo.com/showflat.php?…
This graph of players’ river aggression is a great piece of evidence. Note the single outlying point.
http://www.absolutepokercheats.com/500800vpip.GIF

A classified 2006 TSA report on airport security was leaked to USA Today. (Other papers covered the story, but their articles all seem to be derived from the original USA Today article.)
http://www.usatoday.com/printedition/news/20071018/…
http://www.latimes.com/news/local/…
http://www.kutv.com/content/news/watercooler/…
Weirdest news: “At San Diego International Airport, tests are run by passengers whom local TSA managers ask to carry a fake bomb, said screener Cris Soulia, an official in a screeners union.” Someone please tell me this doesn’t actually happen. “Hi Mr. Passenger. I’m a TSA manager. You know I’m not lying to you because of this official-looking laminated badge I have. We need you to help us test airport security. Here’s a ‘fake’ bomb that we’d like you to carry through security in your luggage. Another TSA manager will, um, meet you at your destination. Give the fake bomb to him when you land. And, by the way, what’s your mother’s maiden name?” How in the world is this a good idea? And how hard is it to dress real TSA managers up like vacationers?

TSA claims that this doesn’t happen:
http://www.tsa.gov/approach/mythbusters/fake_bomb.shtm
Here’s someone who said that it did, at Dulles Airport:
http://www.flyertalk.com/forum/showthread.php?t=737223

“Conceptual Terrorists Encase Sears Tower In Jell-O”
http://www.theonion.com/content/news/…

Hiding data behind attorney-client privilege:
http://denver.bizjournals.com/denver/stories/2007/…
Gregory Engel has some good comments about this:
http://weblog.javazen.com/?p=528
This talk from Defcon this year is related.
http://video.google.com/videoplay?…

Detecting restaurant credit card fraud with checksums.
http://www.punny.org/money/…
I don’t know how common tip fraud is. This thread implies that it’s pretty common, but I use my credit card in restaurants all the time all over the world and I’ve never been the victim of this sort of fraud. On the other hand, I’m not a lousy tipper. And maybe I don’t frequent the right sort of restaurants.
http://www.fatwallet.com/t/52/771939/
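
The underlying trick is simple enough to sketch. A minimal illustration in Python (my own scheme for demonstration, not necessarily the one in the linked post): choose the tip so that the cents of the final total equal the digit sum of the total’s dollars, modulo 100. An altered tip almost certainly breaks the invariant, and you can check each line of your statement in seconds.

    # Tip-checksum sketch: adjust the tip (starting near 18%) until the
    # total's cents equal the digit sum of its dollar amount mod 100.
    from itertools import count

    def digit_sum(n):
        return sum(int(d) for d in str(n))

    def checksummed_tip(bill_cents, rate=0.18):
        start = bill_cents + int(bill_cents * rate)
        for total in count(start):            # search upward from ~18%
            if total % 100 == digit_sum(total // 100) % 100:
                return total - bill_cents, total

    def verify(total_cents):                  # run against your statement
        return total_cents % 100 == digit_sum(total_cents // 100) % 100

    tip, total = checksummed_tip(4350)        # $43.50 bill
    assert (tip, total) == (857, 5207)        # tip $8.57, total $52.07
    assert verify(total)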

Declan McCullagh on the politicization of security:
http://www.news.com/8301-13578_3-9795316-38.html

Urban camouflage: I want to be able to disguise myself as a Japanese vending machine.
http://www.nytimes.com/2007/10/20/world/asia/…
http://www.treehugger.com/files/2007/10/…

I made my own hollow book as a kid. These are much nicer. You can even order your hollow book by topic, the better to blend it into the rest of your library.
http://www.secretstoragebooks.com/

Terrorist insects: Yet another movie-plot threat to worry about.
http://www.boston.com/news/globe/ideas/articles/…

A school in the UK is using RFID chips in school uniforms to track attendance. So now it’s easy to cut class; just ask someone to carry your shirt around the building while you’re elsewhere.
http://www.theregister.co.uk/2007/10/22/…

Brandon Mayfield, the Oregon man who was arrested because his fingerprint “matched” that of an Algerian who handled one of the Madrid bombs, now has a legacy: a judge has ruled partial prints cannot be used in a murder case.
http://www.baltimoresun.com/news/local/…

World Series ticket website hacked? Maybe. Certainly scalpers have an incentive to attack this system.
https://www.schneier.com/blog/archives/2007/10/…

So, this pedophile posts photos of himself with young boys, but obscures his face with the Photoshop “twirl” tool. Turns out that the transformation isn’t lossy, and that you can untwirl his face. He was caught in Thailand. Moral: Don’t blindly trust technology; you need to really know what it’s doing.
http://www.boingboing.net/2007/10/08/…
http://www.reuters.com/article/topNews/…
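
For the curious, the effect is easy to reproduce. A short sketch using scikit-image’s swirl transform as a stand-in for Photoshop’s twirl (interpolation loses a little detail, but faces remain recognizable):

    # Twirling is a coordinate remapping, not redaction: applying the
    # transform again with the opposite strength (nearly) inverts it.
    from skimage import data
    from skimage.transform import swirl

    face = data.astronaut()                                # sample image
    obscured = swirl(face, strength=10, radius=200)        # "anonymized"
    recovered = swirl(obscured, strength=-10, radius=200)  # untwirled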

The Russian company Elcomsoft ported password cracking software to a graphics card, boosting speed by 25 times. Why is this news?
http://technology.newscientist.com/article/…
http://s.techrepublic.com.com/tech-news/?…
The Utah company AccessData has been doing this sort of thing much longer, and has much better technology.
http://www.schneier.com/essay-148.html

Dilbert on profiling:
http://www.dilbert.com/comics/dilbert/archive/…
http://www.dilbert.com/comics/dilbert/archive/…

In a stupid terrorism overreaction, Pennsylvania state officials decided not to publicize the list of polling places.
http://www.foxnews.com/story/0,2933,305537,00.html
A few days later, the governor rescinded the order.
http://www.usatoday.com/news/politics/election2008/…

A specialized printer used to print Missouri driver’s licenses was stolen and recovered. It’s a funny story, actually. Turns out the thief couldn’t get access to the software needed to run the printer; a lockout on the control computer apparently thwarted him. When he called tech support, they tipped off the Secret Service. On the one hand, this probably won’t deter a more sophisticated thief. On the other hand, you can make pretty good forgeries with off-the-shelf equipment.
http://www.news.com/8301-10784_3-9803114-7.html

AT&T has a programming language for wholesale surveillance and data mining:
http://blog.wired.com/27bstroke6/2007/10/…
http://www.freedom-to-tinker.com/?p=1219

The House of Lords on the airplane liquid ban: “We continuously monitor the effectiveness of, in particular, the liquid security measures…” How? “The fact that there has not been a serious incident involving liquid explosives indicates, I would have thought, that the measures that we have put in place so far have been very effective.”
http://www.theregister.co.uk/2007/10/30/…

Architecture and anti-terrorist paranoia:
http://asla.org/awards/2007/studentawards/393.html
http://www.asymmetry.org/2007/10/11/insecurity/

Spammers using porn to break captchas:
http://news.bbc.co.uk/1/hi/technology/7067962.stm
I’ve been saying for years that spammers would start doing this. I’m actually surprised it took this long.

Good essay on the no-joke zone at airports:
http://www.stuff.co.nz/4256682a1861.html

Someone arrested as a homicide suspect walked out of jail after identifying himself as someone else. The biometric system worked, but human error overrode it. It’s a neat scam. Find someone else who’s been arrested, have a friend come and post bail for that person, and then steal his identity when the jailers come into the cellblock.
http://www.cbsnews.com/stories/2007/10/29/national/…

Synthetic identity theft is poised to become a bigger problem than regular identity theft:
http://online.wsj.com/article/SB119362045526074445.html
http://biz.yahoo.com/brn/070516/21861.html?.v=1

Interesting GAO testimony/report: “Internet Infrastructure: Challenges in Developing a Public/Private Recovery Plan,” Gregory C. Wilshusen, Director, Information Security Issues, Government Accountability Office (GAO), October 23, 2007.
http://www.gao.gov/new.items/d08212t.pdf

Mad at someone? Turn him in as a terrorist:
http://news.yahoo.com/s/afp/20071102/od_afp/…
Businesses do this too: “In May 2005 Jet’s application for a licence to fly to America was held up after a firm based in Maryland, also called Jet Airways, accused Mr Goyal’s company of being a money-laundering outfit for al-Qaeda. Mr Goyal says some of his local competitors were behind the claim, which was later withdrawn.”
http://www.economist.com/people/displaystory.cfm?…

This denial-of-service attack against electronic car locks was accidental, but it could certainly be done on purpose.
http://news.bbc.co.uk/1/hi/england/kent/7073935.stm

Interesting identity theft study. (It’s long, but at least read the executive summary.)
http://www.utica.edu/academic/institutes/cimip/…
http://www.siliconvalley.com/security/ci_7248917

GSMK CryptoPhone G10i: open source, and it uses Twofish.
http://www.cryptophone.de/products/CPG10i/index.html

A Salesforce.com data breach results in targeted phishing attacks:
http://it.slashdot.org/article.pl?sid=07/11/06/216228

This is a very moving story about a foreign tourist being removed from a train for taking pictures:
http://www.episcopalcafe.com/daily/war_and_peace/…
A response from the writer of the original article, after people questioned the veracity of the story:
https://www.schneier.com/blog/archives/2007/11/…

An Al Qaeda hacker attack was supposed to begin last Sunday. I noticed nothing.
http://www.debka.com/headline.php?hid=4723

Funny security cartoon from “The New Yorker”:
http://www.cartoonbank.com/product_details.asp?…

Suicide attacks in the computer game Halo 3:
http://www.wired.com/gaming/gamingreviews/…

Computer security consultant admits to running a botnet:
http://www.iht.com/articles/ap/2007/11/10/america/…
http://blog.washingtonpost.com/securityfix/2007/11/…

High-school football prank provokes terrorism fears:
https://www.schneier.com/blog/archives/2007/11/…

Sensible comments from the Canadian privacy commissioner on the no-fly list:
http://www.canada.com/edmontonjournal/news/…

Malcolm Gladwell makes a convincing case that criminal profiling is nothing more than a “cold reading” magic trick.
https://www.schneier.com/blog/archives/2007/11/…

Donald Kerr, the principal deputy director of national intelligence, made some very dangerous comments about redefining privacy. The press reported only the most inflammatory comments:
http://www.cnn.com/2007/POLITICS/11/11/…
https://www.schneier.com/blog/archives/2007/11/…
His actual comments are more nuanced:
http://www.odni.gov/speeches/20071023_speech.pdf
Other comments:
http://digbysblog.blogspot.com/2007/11/…
http://www.crooksandliars.com/2007/11/13/…
http://www.cs.columbia.edu/~smb/blog/2007-11/…
Me on the value of privacy:
https://www.schneier.com/blog/archives/2006/05/…

Hushmail turns encrypted e-mail over to the government:
http://blog.wired.com/27bstroke6/2007/11/…

The overblown threat of suitcase nukes:
http://hosted.ap.org/dynamic/stories/T/…


Switzerland Protects Its Vote with Quantum Cryptography

This is so silly I wasn’t going to even bother blogging about it. But the sheer number of news stories has made me change my mind.

Basically, the Swiss company ID Quantique convinced the Swiss government to use quantum cryptography to protect vote transmissions during its October 21 election. It was a great publicity stunt, and the news articles were filled with hyperbole: how the “unbreakable” encryption will ensure the integrity of the election, how this will protect the election against hacking, and so on.

Complete idiocy. There are many serious security threats to voting systems, especially paperless touch-screen voting systems, but they’re not centered around the transmission of votes from the voting site to the central tabulating office. The software in the voting machines themselves is a much bigger threat, one that quantum cryptography doesn’t solve in the least.

Moving data from point A to point B securely is one of the easiest security problems we have. Conventional encryption works great. PGP, SSL, and SSH could all be used to solve this problem, as could pretty much any good VPN software package; there’s no need to use quantum crypto for this at all. Software security, OS security, network security, and user security are much harder problems, and quantum crypto doesn’t even begin to address them.
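
For comparison, here is what the conventional solution looks like in practice; a few lines of TLS do the job (the hostname and payload are invented for illustration):

    # Point-to-point link encryption with ordinary TLS -- the boring,
    # well-understood alternative to quantum key distribution.
    import socket
    import ssl

    HOST = "results.example.ch"               # hypothetical tally server
    context = ssl.create_default_context()    # certificate checking on

    with socket.create_connection((HOST, 443)) as raw:
        with context.wrap_socket(raw, server_hostname=HOST) as tls:
            tls.sendall(b"precinct=1207&ballots=1542\n")   # made-up payload
            print(tls.version(), tls.cipher())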

So, congratulations to ID Quantique for a nice publicity stunt. But did they actually increase the security of the Swiss election? Doubtful.

http://www.economist.com/displaystory.cfm?…
http://www.itwire.com/content/view/14833/53/
http://www.networkworld.com/news/2007/…
http://www.smh.com.au/news/World/…
http://feeds.arstechnica.com/~r/arstechnica/BAaf/~3/…
http://cwflyris.computerworld.com/t/2191514/92085/…
http://technology.newscientist.com/article/…
Me on quantum cryptography:
http://www.schneier.com/crypto-gram-0312.html#6

Me on voting:
http://www.schneier.com/crypto-gram-0411.html#1
http://www.schneier.com/crypto-gram-0411.html#2
http://www.schneier.com/crypto-gram-0312.html#9
http://www.schneier.com/crypto-gram-0012.html#1


Security by Letterhead

Worsethanfailure.com has an amusing story about someone trying to get something done via phone tech support. The person at the other end of the phone line needs a written request on “company letterhead,” which—after an argument—is provided by fax.

Ha ha. The idiot ISP guy doesn’t realize how easy it is for anyone with a word processor and a laser printer to fake a letterhead. But what this story really shows is how hard it is for people to change their security intuition. Security-by-letterhead was fairly robust when printing was hard, and faking a letterhead was real work. Today it’s easy, but people—especially people who grew up under the older paradigm—don’t act as if it is. They would if they thought about it, but most of the time our security runs on intuition and not on explicit thought.

This kind of thing bites us all the time. Mother’s maiden name is no longer a good password. An impressive-looking storefront on the Internet is not the same as an impressive-looking storefront in the real world. The headers on an e-mail are not a good authenticator of its origin. It’s an effect of technology moving faster than our ability to develop a good intuition about that technology.

And as the pace of technological change keeps accelerating, this will only get worse.

http://worsethanfailure.com/Articles/…


Schneier/BT Counterpane News

Schneier is speaking at the ISMS Forum on November 20 in Madrid:
https://www.ismsforum.es/index.php

Schneier is speaking at the ISF Annual World Congress on December 10 in Cape Town:
http://www.securityforum.org/html/congres.htm

I spoke at the EDUCAUSE conference this year in Seattle. There’s a podcast and video of my talk available (“Ten Trends of Information Security”; I’ve given the talk before) as well as a podcast of an interview with me.
http://www.educause.edu/E07/Program/11073?…
http://connect.educause.edu/blog/mpasiewicz/…

My blog, “Schneier on Security,” has been listed as the Illuminated Site of the Week by Daily Illuminator, run by Steve Jackson Games.
http://www.sjgames.com/ill/archives.html?…


Cyberwar: Myth or Reality?

The biggest problems in discussing cyberwar are the definitions. The things most often described as cyberwar are really cyberterrorism, and the things most often described as cyberterrorism are more like cybercrime, cybervandalism or cyberhooliganism—or maybe cyberespionage.

At first glance, there’s nothing new about these terms except the “cyber” prefix. War, terrorism, crime and vandalism are old concepts. What’s new is the domain; it’s the same old stuff occurring in a new arena. But because cyberspace is different, there are differences worth considering.

Of course, the terms overlap. Although the goals are different, many tactics used by armies, terrorists and criminals are the same. Just as they use guns and bombs, they can use cyberattacks. And just as every shooting is not necessarily an act of war, every successful Internet attack, no matter how deadly, is not necessarily an act of cyberwar. A cyberattack that shuts down the power grid might be part of a cyberwar campaign, but it also might be an act of cyberterrorism, cybercrime or even—if done by some 14-year-old who doesn’t really understand what he’s doing—cyberhooliganism. Which it is depends on the attacker’s motivations and the surrounding circumstances—just as in the real world.

For it to be cyberwar, it must first be war. In the 21st century, war will inevitably include cyberwar. Just as war moved into the air with the development of kites, balloons, and aircraft, and into space with satellites and ballistic missiles, war will move into cyberspace with the development of specialized weapons, tactics, and defenses.

I have no doubt that smarter and better-funded militaries are planning for cyberwar. They have Internet attack tools: denial-of-service tools; exploits that would allow military intelligence to penetrate military systems; viruses and worms similar to what we see now, but perhaps country- or network-specific; and Trojans that eavesdrop on networks, disrupt operations, or allow an attacker to penetrate other networks. I believe militaries know of vulnerabilities in operating systems and in generic or custom military applications, and have code to exploit those vulnerabilities. It would be irresponsible for them not to.

The most obvious attack is the disabling of large parts of the Internet, although in the absence of global war, I doubt a military would do so; the Internet is too useful an asset and too large a part of the world economy. More interesting is whether militaries would disable national pieces of it. For a surgical approach, we can imagine a cyberattack against a military headquarters, or networks handling logistical information.

Destruction is the last thing a military wants to do to an enemy’s communications network. A military wants to shut down that network only if it’s no longer a source of useful information. The best thing is to infiltrate enemy computers and networks, spy on them, and surreptitiously disrupt select pieces of their communications when appropriate. The next best thing is to passively eavesdrop. After that, perform traffic analysis: analyze the characteristics of communications. Only if a military can’t do any of this would it consider shutting the thing down. Or if, as sometimes but rarely happens, the benefits of completely denying the enemy the communications channel outweigh the advantages of eavesdropping on it.

Cyberwar is certainly not a myth. But you haven’t seen it yet, despite the attacks on Estonia. Cyberwar is warfare in cyberspace. And warfare involves massive death and destruction. When you see it, you’ll know it.

This is the second half of a point/counterpoint with Marcus Ranum; it appeared in the November issue of “Information Security Magazine.” You can read Marcus’s half here:
http://searchsecurity.techtarget.com/…
Longer essay of mine on cyberwar:
https://www.schneier.com/blog/archives/2007/06/…


Understanding the Black Market in Internet Crime

Here’s an interesting paper from Carnegie Mellon University: “An Inquiry into the Nature and Causes of the Wealth of Internet Miscreants.”

The paper focuses on the large illicit market that specializes in the commoditization of activities in support of Internet-based crime. The main goal of the paper was to understand and measure how these markets function, and discuss the incentives of the various market entities. Using a dataset collected over seven months and comprising over 13 million messages, they were able to categorize the market’s participants, the goods and services advertised, and the asking prices for selected interesting goods.

Really cool stuff.

Unfortunately, the data is extremely noisy and so far the authors have no way to cross-validate it, so it is difficult to make any strong conclusions.

The press focused on just one thing: a discussion of general ways to disrupt the market. Contrary to the claims in the press coverage, the authors have not built any tools to disrupt the markets.

http://sparrow.ece.cmu.edu/group/pub/…
Press:
http://arstechnica.com/news.ars/post/…
http://www.cmu.edu/news/archive/2007/October/…
Related blog posts:
https://www.schneier.com/blog/archives/2007/10/…
https://www.schneier.com/blog/archives/2007/10/…


The Strange Story of Dual_EC_DRBG

Random numbers are critical for cryptography: for encryption keys, random authentication challenges, initialization vectors, nonces, key agreement schemes, generating prime numbers, and so on. Break the random number generator, and most of the time you break the entire security system. Which is why you should worry about a new random number standard that includes an algorithm that is slow, badly designed, and just might contain a backdoor for the NSA.

Generating random numbers isn’t easy, and researchers have discovered lots of problems and attacks over the years. A recent paper found a flaw in the Windows 2000 random number generator; another paper found flaws in the Linux random number generator. Back in 1996, an early version of SSL was broken because of flaws in its random number generator. In 1999, I co-authored (with John Kelsey and Niels Ferguson) Yarrow, a random number generator based on our own cryptanalysis work. I improved this design four years later—and renamed it Fortuna—in the book “Practical Cryptography,” which I co-authored with Ferguson.

This year, the U.S. government released a new official standard for random number generators, which will likely be followed by software and hardware developers around the world. Called NIST Special Publication 800-90, the 130-page document contains four different approved techniques, called DRBGs, or “Deterministic Random Bit Generators.” All four are based on existing cryptographic primitives. One is based on hash functions, one on HMAC, one on block ciphers, and one on elliptic curves. It’s smart cryptographic design to use only a few well-trusted cryptographic primitives, so building a random number generator out of existing parts is a good thing.
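
As an example of the build-from-existing-parts approach, here is a condensed sketch of the HMAC-based generator, HMAC_DRBG (the standard’s reseeding, security-strength, and additional-input machinery are omitted):

    # HMAC_DRBG core, per SP 800-90, heavily condensed. Not a complete
    # or validated implementation -- an illustration of the construction.
    import hmac, hashlib

    OUTLEN = 32                                   # SHA-256 output size

    def _hmac(key, data):
        return hmac.new(key, data, hashlib.sha256).digest()

    def _update(K, V, provided=b""):
        K = _hmac(K, V + b"\x00" + provided)
        V = _hmac(K, V)
        if provided:
            K = _hmac(K, V + b"\x01" + provided)
            V = _hmac(K, V)
        return K, V

    def instantiate(seed_material):
        K, V = b"\x00" * OUTLEN, b"\x01" * OUTLEN
        return _update(K, V, seed_material)

    def generate(K, V, nbytes):
        out = b""
        while len(out) < nbytes:
            V = _hmac(K, V)
            out += V
        K, V = _update(K, V)                      # backtracking resistance
        return out[:nbytes], K, V

    K, V = instantiate(b"entropy-input || nonce || personalization")
    random_bytes, K, V = generate(K, V, 16)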

But one of those generators—the one based on elliptic curves—is not like the others. Called Dual_EC_DRBG, not only is it a mouthful to say, it’s also three orders of magnitude slower than its peers. It’s in the standard only because it’s been championed by the NSA, which first proposed it years ago in a related standardization project at the American National Standards Institute.

The NSA has always been intimately involved in U.S. cryptography standards—it is, after all, expert in making and breaking secret codes. So the agency’s participation in the NIST standard is not sinister in itself. It’s only when you look under the hood at the NSA’s contribution that questions arise.

Problems with Dual_EC_DRBG were first described in early 2006. The math is complicated, but the general point is that the random numbers it produces have a small bias. The problem isn’t large enough to make the algorithm unusable—and Appendix E of the NIST standard describes an optional workaround to avoid the issue—but it’s cause for concern. Cryptographers are a conservative bunch; we don’t like to use algorithms that have even a whiff of a problem.

But today there’s an even bigger stink brewing around Dual_EC_DRBG. In an informal presentation at the CRYPTO 2007 conference this past August, Dan Shumow and Niels Ferguson showed that the algorithm contains a weakness that can only be described as a backdoor.

This is how it works: There are a bunch of constants—fixed numbers—in the standard used to define the algorithm’s elliptic curve. These constants are listed in Appendix A of the NIST publication, but nowhere is it explained where they came from.

What Shumow and Ferguson showed is that these numbers have a relationship with a second, secret set of numbers that can act as a kind of skeleton key. If you know the secret numbers, you can predict the output of the random number generator after collecting just 32 bytes of its output. To put that in real terms, you only need to monitor one TLS internet encryption connection in order to crack the security of that protocol. If you know the secret numbers, you can completely break any instantiation of Dual_EC_DRBG.
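
To see the shape of the attack, here is a toy analogue in Python, with modular exponentiation standing in for elliptic-curve point multiplication (all parameters are made up; the real generator works over a NIST curve, and an attacker must also guess the bits truncated from each output, which is why about 32 bytes are needed):

    # Toy version of the Shumow-Ferguson observation. If the two public
    # constants are secretly related (gP = gQ^d mod p), anyone who knows
    # d can turn one output into the generator's next internal state.
    p = 2**61 - 1             # a Mersenne prime; parameters are made up
    gQ = 3                    # stands in for the curve point Q
    d = 123456789             # the "secret number" relating the constants
    gP = pow(gQ, d, p)        # stands in for P = d*Q

    def step(state):
        output = pow(gQ, state, p)        # like r_i = x(s_i * Q)
        next_state = pow(gP, state, p)    # like s_(i+1) = x(s_i * P)
        return output, next_state

    state = 987654321                     # secret internal state
    out1, state = step(state)
    out2, state = step(state)

    # The attacker sees only out1 but knows d:
    recovered = pow(out1, d, p)           # (gQ^s)^d = (gQ^d)^s = gP^s
    predicted, _ = step(recovered)
    assert predicted == out2              # all future output is predictable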

The researchers don’t know what the secret numbers are. But because of the way the algorithm works, the person who produced the constants might know; he had the mathematical opportunity to produce the constants and the secret numbers in tandem.

Of course, we have no way of knowing whether the NSA knows the secret numbers that break Dual_EC_DRBG. We have no way of knowing whether an NSA employee, working on his own, came up with the constants and has the secret numbers. We don’t know if someone from NIST, or someone in the ANSI working group, has them. Maybe nobody does.

We don’t know where the constants came from in the first place; we only know that whoever came up with them could have the key to this backdoor. And we know there’s no way for NIST—or anyone else—to prove otherwise.

This is scary stuff indeed.

Even if no one knows the secret numbers, the fact that the backdoor is present makes Dual_EC_DRBG very fragile. If someone were to solve just one instance of the algorithm’s elliptic curve problem, he would effectively have the keys to the kingdom. He could then use it for whatever nefarious purpose he wanted. Or he could publish his result, and render every implementation of the random number generator completely insecure.

It’s possible to implement Dual_EC_DRBG in such a way as to protect it against this backdoor, by generating new constants with another secure random number generator and then publishing the seed. This method is even in the NIST document, in Appendix A. But the procedure is optional, and my guess is that most implementations of Dual_EC_DRBG won’t bother.
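
In the spirit of that workaround, here is a sketch of deriving a verifiably pseudorandom point from a published seed (the idea behind Appendix A, not its exact procedure): hash the seed to candidate x-coordinates until one lands on the curve, so no one could have chosen the point with a known relationship to the other constant.

    # Derive a pseudorandom point on NIST P-256 from a public seed.
    # A sketch of the concept, not the Appendix A algorithm itself.
    import hashlib

    # P-256: y^2 = x^3 - 3x + b over GF(p), with p = 3 (mod 4)
    p = 0xffffffff00000001000000000000000000000000ffffffffffffffffffffffff
    b = 0x5ac635d8aa3a93e7b3ebbd55769886bc651d06b0cc53b0f63bce3c3e27d2604b

    def point_from_seed(seed):
        for counter in range(1000):       # ~half of all x values work
            data = seed + counter.to_bytes(4, "big")
            x = int.from_bytes(hashlib.sha256(data).digest(), "big") % p
            rhs = (pow(x, 3, p) - 3 * x + b) % p
            y = pow(rhs, (p + 1) // 4, p)     # square root, since p % 4 == 3
            if y * y % p == rhs:              # x is actually on the curve
                return x, y

    # Publish the seed with the constants so anyone can re-derive them.
    Q = point_from_seed(b"nothing-up-my-sleeve public seed")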

If this story leaves you confused, join the club. I don’t understand why the NSA was so insistent about including Dual_EC_DRBG in the standard. It makes no sense as a backdoor: it’s public, and rather obvious. It makes no sense from an engineering perspective: it’s too slow for anyone to willingly use it. And it makes no sense from a backwards compatibility perspective: swapping one random number generator for another is easy.

My recommendation, if you’re in need of a random number generator, is not to use Dual_EC_DRBG under any circumstances. If you have to use something in SP 800-90, use CTR_DRBG or Hash_DRBG. Or Fortuna or Yarrow, for that matter.

In the meantime, both NIST and the NSA have some explaining to do.

RNG Flaws:
http://www.cs.virginia.edu/~rjg7v/annotated.html
http://eprint.iacr.org/2007/419
http://eprint.iacr.org/2006/086.pdf
http://www.ddj.com/windows/184409807
http://www.schneier.com/paper-prngs.html

Yarrow:
http://www.schneier.com/yarrow.html

NIST SP 800-90:
http://csrc.nist.gov/publications/nistpubs/800-90/…

Dual_EC_DRBG problems:
http://eprint.iacr.org/2006/190
http://eprint.iacr.org/2007/048

Shumow-Ferguson presentation:
http://rump2007.cr.yp.to/15-shumow.pdf


Comments from Readers

There are hundreds of comments—many of them interesting—on these topics on my blog. Search for the story you want to comment on, and join in.

http://www.schneier.com/


CRYPTO-GRAM is a free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise. You can subscribe, unsubscribe, or change your address on the Web at <http://www.schneier.com/crypto-gram.html>. Back issues are also available at that URL.

Please feel free to forward CRYPTO-GRAM, in whole or in part, to colleagues and friends who will find it valuable. Permission is also granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.

CRYPTO-GRAM is written by Bruce Schneier. Schneier is the author of the best sellers “Beyond Fear,” “Secrets and Lies,” and “Applied Cryptography,” and an inventor of the Blowfish and Twofish algorithms. He is founder and CTO of BT Counterpane, and is a member of the Board of Directors of the Electronic Privacy Information Center (EPIC). He is a frequent writer and lecturer on security topics. See <http://www.schneier.com>.

BT Counterpane is the world’s leading protector of networked information – the inventor of outsourced security monitoring and the foremost authority on effective mitigation of emerging IT threats. BT Counterpane protects networks for Fortune 1000 companies and governments world-wide. See <http://www.counterpane.com>.

Crypto-Gram is a personal newsletter. Opinions expressed are not necessarily those of BT or BT Counterpane.

Copyright (c) 2007 by Bruce Schneier.