Crypto-Gram

May 15, 2007

by Bruce Schneier
Founder and CTO
BT Counterpane
schneier@schneier.com
http://www.schneier.com
http://www.counterpane.com

A free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise.

For back issues, or to subscribe, visit <http://www.schneier.com/crypto-gram.html>.

You can read this issue on the web at <http://www.schneier.com/crypto-gram-0705.html>. These same essays appear in the “Schneier on Security” blog: <http://www.schneier.com/>. An RSS feed is available.


In this issue:
      A Security Market for Lemons
      Is Big Brother a Big Deal?
      Citizen-Counterterrorist Training Video
      News
      Recognizing “Hinky” vs. Citizen Informants
      More on REAL ID
      Least Risk Bomb Location
      Social Engineering Notes
      Schneier/BT Counterpane News
      1933 Anti-Spam Doorbell
      Does Secrecy Help Protect Personal Information?
      Is Penetration Testing Worth It?
      Do We Really Need a Security Industry?
      Comments from Readers


A Security Market for Lemons

More than a year ago, I wrote about the increasing risks of data loss because more and more data fits in smaller and smaller packages. Today I use a 4-GB USB memory stick for backup while I am traveling. I like the convenience, but if I lose the tiny thing I risk all my data.

Encryption is the obvious solution for this problem—I use PGPdisk—but Secustick sounds even better: It automatically erases itself after a set number of bad password attempts. The company makes a bunch of other impressive claims: The product was commissioned, and eventually approved, by the French intelligence service; it is used by many militaries and banks; its technology is revolutionary.

Unfortunately, the only impressive aspect of Secustick is its hubris, which was revealed when Tweakers.net completely broke its security. There’s no data self-destruct feature. The password protection can easily be bypassed. The data isn’t even encrypted. As a secure storage device, Secustick is pretty useless.

On the surface, this is just another snake-oil security story. But there’s a deeper question: Why are there so many bad security products out there? It’s not just that designing good security is hard—although it is—and it’s not just that anyone can design a security product that he himself cannot break. Why do mediocre security products beat the good ones in the marketplace?

In 1970, American economist George Akerlof wrote a paper called “The Market for ‘Lemons,'” which established asymmetrical information theory. He eventually won a Nobel Prize for his work, which looks at markets where the seller knows a lot more about the product than the buyer.

Akerlof illustrated his ideas with a used car market. A used car market includes both good cars and lousy ones (lemons). The seller knows which is which, but the buyer can’t tell the difference—at least until he’s made his purchase. I’ll spare you the math, but what ends up happening is that the buyer bases his purchase price on the value of a used car of average quality.

This means that the best cars don’t get sold; their prices are too high. Which means that the owners of these best cars don’t put their cars on the market. And then this starts spiraling. The removal of the good cars from the market reduces the average price buyers are willing to pay, and then the very good cars no longer sell, and disappear from the market. And then the good cars, and so on until only the lemons are left.
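
To make the spiral concrete, here is a small simulation sketch (my own illustration, not Akerlof’s actual model): buyers offer the average value of the cars still on the market, sellers withdraw any car worth more than the offer, and the process repeats.

# Illustrative sketch of the lemons spiral (not Akerlof's actual model):
# buyers offer the average value of the cars still for sale; sellers
# withdraw anything worth more than the offer; repeat until stable.

def lemons_spiral(car_values):
    market = sorted(car_values)
    while market:
        offer = sum(market) / len(market)              # buyers pay for average quality
        remaining = [v for v in market if v <= offer]  # better cars are withdrawn
        if len(remaining) == len(market):              # no one else withdraws
            return market, offer
        market = remaining
    return market, 0.0

# Ten used cars worth $1,000 to $10,000: only the cheapest one survives.
cars = [1000 * i for i in range(1, 11)]
print(lemons_spiral(cars))    # ([1000], 1000.0)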

In a market where the seller has more information about the product than the buyer, bad products can drive the good ones out of the market.

The computer security market has a lot of the same characteristics as Akerlof’s lemons market. Take the market for encrypted USB memory sticks. Several companies make encrypted USB drives—Kingston Technology sent me one in the mail a few days ago—but even I couldn’t tell you if Kingston’s offering is better than Secustick. Or if it’s better than any other encrypted USB drives. They use the same encryption algorithms. They make the same security claims. And if I can’t tell the difference, most consumers won’t be able to either.

Of course, it’s more expensive to make an actually secure USB drive. Good security design takes time, and necessarily means limiting functionality. Good security testing takes even more time, especially if the product is any good. This means the less-secure product will be cheaper, sooner to market and have more features. In this market, the more-secure USB drive is going to lose out.

I see this kind of thing happening over and over in computer security. In the late 1980s and early 1990s, there were more than a hundred competing firewall products. The few that “won” weren’t the most secure firewalls; they were the ones that were easy to set up, easy to use and didn’t annoy users too much. Because buyers couldn’t base their buying decisions on the relative security merits, they based them on these other criteria. The intrusion detection system, or IDS, market evolved the same way, and before that the antivirus market. The few products that succeeded weren’t the most secure, because buyers couldn’t tell the difference.

How do you solve this? You need what economists call a “signal,” a way for buyers to tell the difference. Warranties are a common signal. Alternatively, an independent auto mechanic can tell good cars from lemons, and a buyer can hire his expertise. The Secustick story demonstrates this. If there is a consumer advocate group that has the expertise to evaluate different products, then the lemons can be exposed.

Secustick, for one, seems to have been withdrawn from sale.

But security testing is both expensive and slow, and it just isn’t possible for an independent lab to test everything. Unfortunately, the exposure of Secustick is an exception. It was a simple product, and easily exposed once someone bothered to look. A complex software product—a firewall, an IDS—is very hard to test well. And, of course, by the time you have tested it, the vendor has a new version on the market.

In reality, we have to rely on a variety of mediocre signals to differentiate the good security products from the bad. Standardization is one signal. The widely used AES encryption standard has reduced, although not eliminated, the number of lousy encryption algorithms on the market. Reputation is a more common signal; we choose security products based on the reputation of the company selling them, the reputation of some security wizard associated with them, magazine reviews, recommendations from colleagues or general buzz in the media.

All these signals have their problems. Even product reviews, which should be as comprehensive as the Tweakers’ Secustick review, rarely are. Many firewall comparison reviews focus on things the reviewers can easily measure, like packets per second, rather than how secure the products are. In IDS comparisons, you can find the same bogus “number of signatures” comparison. Buyers lap that stuff up; in the absence of deep understanding, they happily accept shallow data.

With so many mediocre security products on the market, and the difficulty of coming up with a strong quality signal, vendors don’t have strong incentives to invest in developing good products. And the vendors that do tend to die a quiet and lonely death.

Risks of data in small packages:
http://www.wired.com/politics/security/commentary/…
Secustick and review:
http://www.secustick.nl/engels/index.html
http://tweakers.net/reviews/683

Snake oil:
http://www.schneier.com/crypto-gram-9902.html#snakeoil
http://www.schneier.com/…

“A Market for Lemons”:
http://en.wikipedia.org/wiki/The_Market_for_Lemons
http://www.students.yorku.ca/~siccardi/…

Kingston USB drive:
http://www.kingston.com/flash/dt_secure.asp

Slashdot thread:
http://it.slashdot.org/article.pl?sid=07/04/19/140245

This essay originally appeared in Wired.
http://www.wired.com/politics/security/commentary/…


Is Big Brother a Big Deal?

Big Brother isn’t what he used to be. George Orwell extrapolated his totalitarian state from the 1940s. Today’s information society looks nothing like Orwell’s world, and watching and intimidating a population today isn’t anything like what Winston Smith experienced.

Data collection in “1984” was deliberate; today’s is inadvertent. In the information society, we generate data naturally. In Orwell’s world, people were naturally anonymous; today, we leave digital footprints everywhere.

“1984”‘s police state was centralized; today’s is decentralized. Your phone company knows who you talk to, your credit card company knows where you shop and Netflix knows what you watch. Your ISP can read your email, your cell phone can track your movements and your supermarket can monitor your purchasing patterns. There’s no single government entity bringing this together, but there doesn’t have to be. As Neal Stephenson said, the threat is no longer Big Brother, but instead thousands of Little Brothers.

“1984”‘s Big Brother was run by the state; today’s Big Brother is market driven. Data brokers like ChoicePoint and credit bureaus like Experian aren’t trying to build a police state; they’re just trying to turn a profit. Of course these companies will take advantage of a national ID; they’d be stupid not to. And the correlations, data mining and precise categorizing they can do are why the U.S. government buys commercial data from them.

“1984”-style police states required lots of people. East Germany employed one informant for every 66 citizens. Today, there’s no reason to have anyone watch anyone else; computers can do the work of people.

“1984”-style police states were expensive. Today, data storage is constantly getting cheaper. If some data is too expensive to save today, it’ll be affordable in a few years.

And finally, the police state of “1984” was deliberately constructed, while today’s is naturally emergent. There’s no reason to postulate a malicious police force and a government trying to subvert our freedoms. Computerized processes naturally throw off personalized data; companies save it for marketing purposes, and even the most well-intentioned law enforcement agency will make use of it.

Of course, Orwell’s Big Brother had a ruthless efficiency that’s hard to imagine in a government today. But that completely misses the point. A sloppy and inefficient police state is no reason to cheer; watch the movie “Brazil” and see how scary it can be. You can also see hints of what it might look like in our completely dysfunctional “no-fly” list and useless projects to secretly categorize people according to potential terrorist risk. Police states are inherently inefficient. There’s no reason to assume today’s will be any more effective.

The fear isn’t an Orwellian government deliberately creating the ultimate totalitarian state, although with the U.S.’s programs of phone-record surveillance, illegal wiretapping, massive data mining, a national ID card no one wants and Patriot Act abuses, one can make that case. It’s that we’re doing it ourselves, as a natural byproduct of the information society. We’re building the computer infrastructure that makes it easy for governments, corporations, criminal organizations and even teenage hackers to record everything we do, and—yes—even change our votes. And we will continue to do so unless we pass laws regulating the creation, use, protection, resale, and disposal of personal data. It’s precisely the attitude that trivializes the problem that creates it.

This essay appeared in the May issue of “Information Security,” as the second half of a point/counterpoint with Marcus Ranum.
http://informationsecurity.techtarget.com/magItem/…
Marcus’s half:
http://www.ranum.com/security/computer_security/…


Citizen-Counterterrorist Training Video

The seven signs of terrorist activity, according to a Michigan State Police training video:

Surveillance
Elicitation
Tests of security
Acquiring supplies
Suspicious people who “don’t belong”
Dry runs/trial runs
Deploying assets or getting into position

I especially like the scenes of concerned citizens calling the police. Anyone care to guess what the false alarm rate would be if everyone started making phone calls like this?

http://www.hanford.gov/oci/video/7signsofterrorism.wmv


News

The DHS no longer has a failing cybersecurity grade; they got a D. The rest of the U.S. government didn’t do very well. Eight of twenty-four departments (including the Department of Defense) failed. Overall, the federal government received a C- (up from a D+ last year).
http://news.zdnet.com/2100-1009_22-6175666.html
http://www.computerworld.com/action/article.do?…

Terror-fighting dolphins and sea lions patrol for underwater swimmers:
http://www.forbes.com/feeds/ap/2007/04/14/…

Yet another Boston terrorism overreaction, this one involving backpacks hanging in the trees near schools. Are these people trying to be stupid? Terrorism used to be hard. Now all you have to do is hang backpacks from trees near schools.
http://news.bostonherald.com/localRegional/view.bg?…
Refuse to be terrorized, people!
http://www.schneier.com/essay-124.html

There’s not just one watch list in the US, but many.
http://www.wired.com/politics/onlinerights/news/…

Foiling bank robbers with kindness: it seems to work really well. What I like about this security system is that it fails really well in the event of a false alarm. There’s nothing wrong with being extra nice to a legitimate customer.
http://www.eyewitnessnewstv.com/global/story.asp?…

Arresting children: a disturbing trend. These are not the sorts of matters the police should be getting involved in. The police aren’t trained to handle children this age, and children this age don’t benefit from being fingerprinted and thrown in jail.
http://welcome-to-pottersville.blogspot.com/2007/04/…

A new development from surveillance-camera-happy England: cameras that “predict” crimes. This moves us further along the continuum into thoughtcrimes, but near as I can tell, the system just collects evidence on people it thinks suspicious, just in case. Assuming the data is erased immediately after, it’s much less invasive than actually accosting someone for thoughtcrime; the cost of false alarms is minimal. I doubt it works nearly as well as the article claims, but that’s likely to change in 5 to 10 years. For example, there’s a lot of research being done in the area of microfacial expressions to detect lying and other thoughts. This is the sort of technological advance that we need to be talking about in terms of security, privacy, and liberty.
http://www.timesonline.co.uk/tol/news/uk/crime/…

Here’s a technology that uses keystroke biometrics to help detect if someone else is typing in your password. I think this is a good idea. I wouldn’t want to automatically block users unless they get this right, and the false-positive/false-negative ratio would have to be jiggered properly, but if they can get it working right, it’s an extra layer of authentication for “free.”
http://www.biopassword.com/
http://technology.timesonline.co.uk/tol/news/…
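
For the curious, keystroke dynamics is easy to sketch, though this toy version is mine and not BioPassword’s algorithm: record how long each key is held and the gaps between keys, then compare a login attempt’s timings against the enrolled profile.

# Toy keystroke-dynamics check (my own sketch, not BioPassword's algorithm):
# compare key-hold times and between-key gaps against an enrolled profile.

def features(key_events):
    """key_events: list of (key, press_ms, release_ms) for a password entry."""
    dwell = [release - press for _, press, release in key_events]     # hold times
    flight = [key_events[i + 1][1] - key_events[i][2]                 # gaps between keys
              for i in range(len(key_events) - 1)]
    return dwell + flight

def looks_like_owner(profile, attempt, threshold_ms=40.0):
    """True if the mean timing deviation from the profile is small enough."""
    diffs = [abs(a - b) for a, b in zip(profile, features(attempt))]
    return sum(diffs) / len(diffs) < threshold_ms

profile = [95, 110, 90, 105, 60, 75, 80]   # dwell + flight times from enrollment, in ms
attempt = [("p", 0, 98), ("a", 160, 268), ("s", 330, 424), ("s", 490, 596)]
print(looks_like_owner(profile, attempt))  # True: timings match the owner's rhythm
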
Hacking the U.S. Post Office: fooling it into sending mail to “forbidden” countries:
http://englishrussia.com/?p=334#more-334

Watch the video of how the Australian authorities react when someone—dressed either as an American or Arab tourist—films the Sydney Harbor Bridge and a nuclear reactor. The synopsis: The Arab is intercepted within three minutes both times, while the U.S. tourist is given instructions on how to get inside the nuclear facility. Moral for terrorists: dress like an American. (By the way, Lucas Heights is a research reactor. It produces medical isotopes and performs research, and doesn’t produce power.)
http://youtube.com/watch?v=McB9tsabPn0

According to the Internet Crime Complaint Center, as reported in “U.S. News and World Report,” auction fraud and non-delivery of items purchased are far and away the most common Internet crimes. Identity theft is way down near the bottom. “The feds caution that these figures don’t represent a scientific sample of just how much Net crime is out there. They note, for example, that the high number of auction fraud complaints is due, in part, to eBay and other big E-commerce outfits offering customers direct links to the IC3 website. And it’s tough to measure what may be the Web’s biggest scourge, child porn, simply by complaints. Still, the survey is a useful snapshot, even if it tells us what we already know: that the Internet, like the rest of life, is full of bad guys. Caveat emptor.”
http://www.usnews.com/usnews/news/badguys/070416/…

In the aftermath of the Virginia Tech shootings, Yale tried to ban the use of stage weapons on stage. I wish I could make a joke about security theater at the theater, but this is just basic stupidity. Not only does this not make anyone safer, it doesn’t even make anyone feel safer.
http://www.yaledailynews.com/articles/view/20843
The order was quickly rescinded, without any demonstration of common sense:
http://yaledailynews.com/articles/view/20913

An interesting rant from a cop. Summary: people use policemen as props in their personal disputes.
http://syracuse.craigslist.org/about/best/lax/…
If the police implement programs to let ordinary citizens report suspected terrorists, this is the kind of thing that will result.

English professor reported for recycling paper while looking Middle Eastern:
http://alternet.org/rights/50939/

Triggering bombs by remote key entry devices:
https://www.schneier.com/blog/archives/2007/04/…

Commentary on Vista security and the Microsoft monopoly:
https://www.schneier.com/blog/archives/2007/04/…

Richard Clarke on the “puppy dog” theory of terrorism:
http://www.nydailynews.com/opinions/2007/04/25/…
“Get Fuzzy” is one of my favorite comic strips. A recent one was about security.
http://www.comics.com/comics/getfuzzy/archive/…

If you want your security technology to be considered for the 2012 London Olympics, you have to be a major sponsor of the event. I have repeatedly said that security is generally only part of a larger context, but this borders on ridiculous.
http://www.itpro.co.uk/s/editorial-blogs/…
In East Belfast, burglars called in a bomb threat. Residents evacuated their homes, and then the burglars proceeded to rob eight empty houses on the block. I’ve written about this sort of thing before: sometimes security procedures themselves can be exploited by attackers. It was Step 4 of my “five-step process” from “Beyond Fear” (pages 14-15). A national ID card makes identity theft more lucrative; forcing people to remove their laptops at airport security checkpoints makes laptop theft more common. Moral: you can’t just focus on one threat. You need to look at the broad spectrum of threats, and pay attention to how security against one affects the others.
http://news.bbc.co.uk/1/hi/northern_ireland/6580873.stm

Clever Google ad hack:
http://blog.washingtonpost.com/securityfix/2007/04/…

There’s a class-action lawsuit against TJX by various banks and banking groups, arguing that TJX failed to protect customer data with adequate security measures and was less than honest about how it handled data. This case could break new legal ground, and is worth watching closely. (I’m rooting for the plaintiff.)
http://searchsecurity.techtarget.com/…
More details on the theft:
http://online.wsj.com/article_email/…
http://wifinetnews.com/archives/007604.html

Encrypted phones are big business in Italy as a defense against wiretapping:
http://www.nytimes.com/2007/04/30/business/…
Here’s a taser disguised as a tampon. Real or hoax?
http://www.americaninventorspot.com/security_system

Security arms races in duck oviducts and phalluses; interesting research from Yale:
http://www.nytimes.com/2007/05/01/science/…
Project Honey Pot files a $1B+ lawsuit against spammers.
http://www.projecthoneypot.org/5days_thursday.php

We all know that CRT displays radiate like mad, and someone with the right equipment can read them at a distance. Markus Kuhn demonstrates how to do the same thing with LCD displays.
http://www.newscientist.com/blog/technology/2007/04/…
Older research along these lines:
http://unix.be.eu.org/docs-free/tempest/…

UK police blow up a bat detector left beside the A23, thinking it’s a bomb. For those who don’t know, the A23 is the main road between London and Brighton on the south coast.
http://www.theargus.co.uk/misc/print.php?artid=1372149
http://www.theregister.co.uk/2007/05/04/…
http://news.bbc.co.uk/1/hi/england/sussex/6618737.stm
I like this comment: “We are working on ways to improve identification of our property to avoid a repeat of the incident.” Might I suggest a sign: “This is not a bomb.”

Another xkcd cartoon, this one on cryptography:
http://xkcd.com/c257.html

New Trojan mimics Windows activation interface.
http://www.pcmag.com/article2/0,1895,2126214,00.asp
http://www.symantec.com/security_response/…
U.S./Canadian dispute over border crossing procedures.
http://www.ctv.ca/servlet/ArticleNews/story/CTVNews/…
Two teenage boys detonated a stink bomb on a Sydney commuter train, and prompted a counter-terrorism response. Best quote: “‘It would have been terrifying. You’re on a train, you hear a loud bang, the logical conclusion that people drew was (that it was) probably a terrorist attack,’ Mr Owens told reporters.” I agree that it was the conclusion that people drew, but not that it was a logical conclusion.
http://www.stuff.co.nz/4047150a12.html

Weird lottery hack:
http://www.smh.com.au/articles/2007/05/02/…

University of California’s tips for what to do when there’s a shooter on campus:
http://www.ucpd.ucla.edu/ucpd/zippdf/2007/…
“The Myth of the Superuser,” a very interesting law journal paper by Paul Ohm:
http://papers.ssrn.com/sol3/papers.cfm?…
Here’s a three-part summary of the topic by Ohm:
http://www.volokh.com/archives/…
http://volokh.com/archives/…
http://volokh.com/archives/…
Clarification by Ohm to the blog post:
https://www.schneier.com/blog/archives/2007/05/…

A researcher claims to have found “the first remotely exploitable SCADA security vulnerability,” and I think that’s correct. In general, I think the threat of SCADA-based attacks is overblown today, but it will become more serious in the coming years.
http://www.physorg.com/news94025004.html

Low-tech Tamil Tiger guerillas ground high-tech Sri Lankan Air Force:
http://www.theaustralian.news.com.au/story/…
http://www.gulf-times.com/site/topics/article.asp?…
Remember the weird story about radio transmitters found in Canadian coins in order to spy on Americans? Complete nonsense.
https://www.schneier.com/blog/archives/2007/05/…

Sometimes, that strange backpack *is* a bomb. Not very often, but once in a great while. Still, I don’t think it’s possible to solve this by preemptively assuming that all strange objects are potential bombs. There are just too many strange objects in the world.
http://www.cnn.com/2007/US/05/07/…
Blog entry URL:
https://www.schneier.com/blog/archives/2007/05/…

Singapore is setting up a $98M research center for quantum computation. Great news, but what in the world does this quote mean? “The kind of quantum cryptography we develop here is probably the most sophisticated that is not available in any other countries so we have some ideas to make it so secure that you don’t even have to trust equipment that you could buy from a vendor.”
http://www.channelnewsasia.com/stories/…
The most secure car park in the world?
http://en.wikipedia.org/wiki/Bold_Lane

Sex toy security risk: sounds like bullshit—or clever marketing—to me.
http://observer.guardian.co.uk/world/story/…

“Is your PC virus-free? Get it infected here!” An actual Google Adwords campaign.
http://didierstevens.wordpress.com/2007/05/07/…
The Beerbelly attaches to your abdomen and looks like a beer gut, allowing you to smuggle beer past guards—even guards that do cursory pat-down searches.
http://thebeerbelly.com/


Recognizing “Hinky” vs. Citizen Informants

On the subject of people noticing and reporting suspicious actions, I have been espousing two views that some find contradictory. One, we are all safer if police, guards, security screeners, and the like ignore traditional profiling and instead pay attention to people acting hinky: not right. And two, if we encourage people to contact the authorities every time they see something suspicious, we’re going to waste our time chasing false alarms: foreigners whose customs are different, people who are disliked by someone, and so on.

The key difference is expertise. People trained to be alert for something hinky will do much better than any profiler, but people who have no idea what to look for will do no better than random.

Here’s a story that illustrates this: Last week, a student at the Rochester Institute of Technology was arrested with two illegal assault weapons and 320 rounds of ammunition in his dorm room and car:

“The discovery of the weapons was made only by chance. A conference center worker who served in the military was walking past Hackenburg’s dorm room. The door was shut, but the worker heard the all-too-familiar racking sound of a weapon, said the center’s director Bill Gunther.”

Notice how expertise made the difference. The “conference center worker” had the right knowledge to recognize the sound and to understand that it was out of place in the environment in which he heard it. He wasn’t primed to be on the lookout for suspicious people and things; his trained awareness kicked in automatically. He recognized hinky, and he acted on that recognition. A random person simply can’t do that; he won’t recognize hinky when he sees it. He’ll report imams for praying, a neighbor he’s pissed at, or people at random. He’ll see an English professor recycling paper, and report a Middle-Eastern-looking man leaving a box on the sidewalk.

We all have some experience with this. Each of us has some expertise in some topic, and will occasionally recognize that something is wrong even though we can’t fully explain what or why. An architect might feel that way about a particular structure; an artist might feel that way about a particular painting. I might look at a cryptographic system and intuitively know something is wrong with it, well before I figure out exactly what. Those are all examples of a subliminal recognition that something is hinky—in our particular domain of expertise.

Good security people have the knowledge, skill, and experience to do that in security situations. It’s the difference between a good security person and an amateur.

This is why behavioral assessment profiling is a good idea, while the Terrorism Information and Prevention System (TIPS) isn’t. This is why training truckers to look out for suspicious things on the highways is a good idea, while a vague list of things to watch out for isn’t. It’s why an Israeli driver recognized a passenger as a suicide bomber, while an American driver probably wouldn’t.

This kind of thing isn’t easy to train. (Much has been written about it, though; Malcolm Gladwell’s “Blink” discusses this in detail.) You can’t learn it from watching a seven-minute video. But the more we focus on this—the more we stop wasting our airport security resources on screeners who confiscate rocks and snow globes, and instead focus them on well-trained screeners walking through the airport looking for hinky—the more secure we will be.

Hinky:
https://www.schneier.com/blog/archives/2005/07/…

RIT Story:
http://www.nj.com/news/ledger/morris/index.ssf?/…
Casino security and the “Just Doesn’t Look Right (JDLR)” principle:
http://www.casinosurveillancenews.com/jdlr.htm

Commentary:
http://www.cato-at-liberty.org/2007/04/26/…
The blog post has many more links to the specific things mentioned in the essay:
https://www.schneier.com/blog/archives/2007/04/…


More on REAL ID

In March, the Department of Homeland Security released its long-awaited guidance document regarding national implementation of the Real ID program, as part of its post-9/11 national security initiatives. It is perhaps quite telling that despite bipartisan opposition, Real ID was buried in a 2005 “must-pass” military spending bill and enacted into law without public debate or congressional hearings.

DHS has maintained that the Real ID concept is not a national identification database. While it’s true that the system is not a single database per se, this is a semantic dodge; according to the DHS document, Real ID will be a collaborative data-interchange environment built from a series of interlinking systems operated and administered by the states. In other words, to the Department of Homeland Security, it’s not a single database because it’s not a single system. But the functionality of a single database remains intact under the guise of a federated data-interchange environment.
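
A toy sketch makes the point (the states, record fields, and query function here are hypothetical, not anything from the DHS document): if every state system answers the same query, a few lines of glue give any caller the functionality of a single national database.

# Hypothetical sketch: a "federated data-interchange environment" that
# behaves, to anyone querying it, exactly like one national database.

STATE_SYSTEMS = {   # stand-ins for independently operated state databases
    "MN": [{"license": "MN-1234", "name": "Alice Example", "dob": "1970-01-01"}],
    "VA": [{"license": "VA-5678", "name": "Bob Example", "dob": "1980-02-02"}],
}

def lookup(name):
    """Query every state's system and merge the answers into one result set."""
    hits = []
    for state, records in STATE_SYSTEMS.items():
        hits += [dict(record, state=state) for record in records
                 if record["name"] == name]
    return hits

# One call searches every state at once -- functionally a single database.
print(lookup("Alice Example"))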

The DHS document notes the “primary benefit of Real ID is to improve the security and lessen the vulnerability of federal buildings, nuclear facilities, and aircraft to terrorist attack.” We know now that vulnerable cockpit doors were the primary security weakness contributing to 9/11, and reinforcing them was a long-overdue protective measure to prevent hijackings. But this still raises an interesting question: Are there really so many members of the American public just “dropping by” to visit a nuclear facility that it’s become a primary reason for creating a national identification system? Are such visitors actually admitted?

DHS proposes guidelines for proving one’s identity and residence when applying for a Real ID card. Yet while the department concedes it’s a monumental task to prove one’s domicile or residence, it leaves it up to the states to determine what documents would be adequate proof of residence—and even suggests that a utility bill or bank statement might be appropriate documentation. If so, a person could easily generate multiple proof-of-residence documents. Basing Real ID on such easy-to-forge documents obviates a large portion of what Real ID is supposed to accomplish.

Finally, and perhaps most importantly for Americans, the very last paragraph of the 160-page Real ID document deserves special attention. In a nod to states’ rights advocates, DHS declares that states are free not to participate in the Real ID system if they choose—but any identification card issued by a state that does not meet Real ID criteria is to be clearly labeled as such, to include “bold lettering” or a “unique design” similar to how many states design driver’s licenses for those under 21 years of age.

In its own guidance document, the department has proposed branding citizens not possessing a Real ID card in a manner that lets all who see their official state-issued identification know that they’re “different,” and perhaps potentially dangerous, according to standards established by the federal government. They would become stigmatized, branded, marked, ostracized, segregated. All in the name of protecting the homeland; no wonder this provision appears at the very end of the document.

One likely outcome of this DHS-proposed social segregation is that people presenting non-Real ID identification automatically will be presumed suspicious and perhaps subject to additional screening or surveillance to confirm their innocence at a bar, office building, airport, or routine traffic stop. Such a situation would establish a new form of social segregation—an attempt to separate “us” from “them” in the age of counterterrorism and the new normal, where one is presumed suspicious until proven more suspicious.

Two other big-picture concerns about Real ID come to mind: Looking at the overall concept of a national identification database, and given existing data security controls in large distributed systems, one wonders how vulnerable this system-of-systems will be to data loss or identity theft resulting from unscrupulous employees, flawed technologies, external compromises or human error—even under the best of security conditions. And second, there is no clear guidance on the limits of how the Real ID database would be used. Other homeland security initiatives, such as the Patriot Act, have been used and applied—some say abused—for purposes far removed from anything related to homeland security. How can we ensure the same will not happen with Real ID?

As currently proposed, Real ID will fail for several reasons. From a technical and implementation perspective, there are serious questions about its operational abilities both to protect citizen information and resist attempts at circumvention by adversaries. Financially, the initial unfunded $11 billion cost, forced onto the states by the federal government, is excessive. And from a sociological perspective, Real ID will increase the potential for expanded personal surveillance and lay the foundation for a new form of class segregation in the name of protecting the homeland.

It’s time to rethink some of the security decisions made during the emotional aftermath of 9/11 and determine whether they’re still a good idea for homeland security and America. After all, if Real ID was such a well-conceived plan, Maine and 22 other states wouldn’t be challenging it in their legislatures or rejecting the Real ID concept for any number of reasons. But they are.

And we as citizens should, too. Let the debate begin.

Me on REAL-ID:
http://www.schneier.com/essay-160.html

DHS guidance document:
http://news.com.com/…
On May 8, I testified in front of the Senate Judiciary Committee on REAL ID. Written testimony, and video, on the website.
http://judiciary.senate.gov/hearing.cfm?id=2746
http://www.washingtonpost.com/wp-dyn/content/…
This essay was written with Richard Forno, and appeared on News.com:
http://news.com.com/…
Status of anti-REAL-ID legislation:
http://www.realnightmare.org/news/105/


Least Risk Bomb Location

This fascinating tidbit is from “Aviation Week and Space Technology” (April 9, 2007, p. 21), in David Bond’s “Washington Outlook” column (unfortunately, not online).

“Security and society’s litigious bent combine to make airlines unsuited for figuring out the best place to put a suspected explosive device discovered during a flight, AirTran Airways tells the FAA (Federal Aviation Administration). Commenting on a proposed rule that would require, among other things, designation of a ‘least risk bomb location’ (LRBL)—the place on an aircraft where a bomb would do the least damage if it exploded—AirTran engineering director Rick Shideler says it’s hard for airlines to get aircraft design information related to such a location because of agreements between manufacturers and the Homeland Security Department. The carrier got LRBL information for its 717s and 737s from Boeing but can’t find out why the locations were chosen, ‘or even who specifically picked them,’ because of liability laws.”

I’d never heard of an LRBL before, but the FAA has publicly proposed guidelines on them. Apparently flight crews are trained to stash suspicious objects there.

But liability seems to be getting in the way of security and common sense here. It seems reasonable that an airline’s engineering director should be allowed to understand the technical reasoning behind the choice of LRBL, and maybe even give the manufacturer feedback on it.

When I posted this to my blog, a pilot commented: “The designation of a ‘least risk bomb location’ is nothing new. All planes have a designated area where potentially dangerous packages should be placed. Usually it’s in the back, adjacent to a door. There are a slew of procedures to be followed if an explosive device is found on board: depressurizing the plane, moving the item to the LRBL, and bracing/smothering it with luggage and other dense materials so that the force of the blast is directed outward, through the door.”

Probably won’t help, but you’ve got to put the damn thing somewhere.

FAA guidelines:
http://search.google.dot.gov/FAA/…


Social Engineering Notes

This is a fantastic story of a major prank pulled off at the Super Bowl this year. Basically, five people smuggled more than a quarter of a ton of material into Dolphin Stadium in order to display their secret message on TV.

Given all the security, it’s amazing how easy it was for them to become part of the security perimeter with all that random stuff. But to those of us who follow these things, it shouldn’t be. The ringleader’s observations are spot on:

1. Wear a suit.
2. Wear a Bluetooth headset.
3. Pretend to be talking loudly to someone on the other line.
4. Carry a clipboard.
5. Be white.

Again, no surprise here. But it makes you wonder what the point is of annoying the hell out of ordinary citizens with security measures (like pat-down searches) when the emperor has no clothes.

Someone who crashed the Oscars last year gave similar advice: “Show up at the theater, dressed as a chef carrying a live lobster, looking really concerned.”

On a much smaller scale, here’s someone’s story of social engineering a bank branch: “I enter the first branch at approximately 9:00AM. Dressed in Dickies coveralls, a baseball cap, work boots and sunglasses I approach the young lady at the front desk. ‘Hello,’ I say. ‘John Doe with XYZ Pest Control, here to perform your pest inspection.’ I flash her the smile followed by the credentials. She looks at me for a moment, goes ‘Uhm… okay… let me check with the branch manager…’ and picks up the phone. I stand around twiddling my thumbs and wait while the manager is contacted and confirmation is made. If all goes according to plan, the fake emails I sent out last week notifying branch managers of our inspection will allow me access. It does.”

Social engineering is surprisingly easy. As I said in “Beyond Fear” (page 144): “Social engineering will probably always work, because so many people are by nature helpful and so many corporate employees are naturally cheerful and accommodating. Attacks are rare, and most people asking for information or help are legitimate. By appealing to the victim’s natural tendencies, the attacker will usually be able to cozen what she wants.”

All it takes is a good cover story.

Zug prank:
http://www.zug.com/pranks/super/
http://www.zug.com/pranks/super/press_release.html
http://cockeyed.com/pranks/hargrave/superbowl01.shtml

Some think it is a hoax:
http://www.engadget.com/2007/03/17/…
Others don’t:
http://blog.wired.com/tableofmalcontents/2007/03/…
http://cockeyed.com/pranks/hargrave/…

Stadium pat-down searches:
http://www.aclu.org/crimjustice/searchseizure/…

Dave Barry on stadium security:
https://www.schneier.com/blog/archives/2007/02/…

Crashing the Oscars:
https://www.schneier.com/blog/archives/2006/03/…

Social engineering a bank branch:
http://www.protokulture.net/?p=79


Schneier/BT Counterpane News

Video and audio of my March 21 talk at the British Computer Society, on information security trends and economic considerations.
http://www.bcs.org/server.php?show=ConWebDoc.11190
http://www.schneier.com/schneier-mar07.ogg

Video and audio of my April 3 talk at Macalester College titled “Counterterrorism in America: Security Theater Against Movie-Plot Threats.”
http://www.macalester.edu/whatshappening/audio/…
http://www.macalester.edu/whatshappening/audio/…
Schneier is speaking at the Web Security Summit on May 23 in Johannesburg, South Africa:
http://www.itweb.co.za/events/securitysummit/2007/…

Schneier is speaking at Cisco Security 2007 on May 31 in Oslo, Norway:
http://www.cisco.no/security2007

Schneier is speaking at the Gartner IT Security Summit on June 4 in Washington DC:
http://www.gartner.com/2_events/conferences/sec13.jsp

Schneier is speaking at the ACLU Biennial Conference on June 14 in Seattle:
http://action.aclu.org/site/Calendar/397839578?…


1933 Anti-Spam Doorbell

Here’s a great description of an anti-spam doorbell from 1933. A visitor had to deposit a dime into a slot to make the doorbell ring. If the homeowner appreciated the visit, he would return the dime. Otherwise, the dime became the cost of disturbing the homeowner.

This kind of system has been proposed for e-mail as well: the sender has to pay the receiver—or someone else in the system—a nominal amount for each e-mail sent. This money is returned if the e-mail is wanted, and forfeited if it is spam. The result would be to raise the cost of sending spam to the point where it is uneconomical.
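
The mechanics are simple to sketch (the bond amount, accounts, and function names below are hypothetical, not taken from any specific proposal): debit the sender a small bond when the message is sent, and refund or forfeit it depending on the receiver’s verdict.

# Sketch of a sender-pays ("e-mail postage") scheme. The bond amount,
# accounts, and function names are hypothetical, purely for illustration.

BOND = 0.05   # nominal charge per message, refunded if the mail is wanted

accounts = {"alice@example.com": 10.00, "bob@example.com": 10.00}

def send(sender, receiver):
    accounts[sender] -= BOND                       # bond held when the message is sent
    return {"from": sender, "to": receiver, "bond": BOND}

def judge(message, wanted):
    if wanted:
        accounts[message["from"]] += message["bond"]   # refunded to the sender
    else:
        accounts[message["to"]] += message["bond"]     # forfeited to the receiver

msg = send("alice@example.com", "bob@example.com")
judge(msg, wanted=False)      # Bob marks it as spam
print(accounts)               # Alice is out a nickel; Bob collects it

Note that the sketch also makes the attack described below obvious: mail sent from a hijacked machine drains that victim’s account, not the spammer’s.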

I think it’s worth comparing the two systems—the doorbell system and the e-mail system—to demonstrate why it won’t work for spam.

The doorbell system fails for three reasons: the percentage of annoying visitors is small enough to make the system largely unnecessary, visitors don’t generally have dimes on them (presumably fixable if the system becomes ubiquitous), and it’s too easy to successfully bypass the system by knocking (not true for an apartment building).

The anti-spam system doesn’t suffer from the first two problems: spam is an enormous percentage of total e-mail, and an automated accounting system makes the financial mechanics easy. But the anti-spam system is too easy to bypass, and it’s too easy to hack. And once you set up a financial system, you’re simply inviting hacks.

The anti-spam system fails because spammers don’t have to send e-mail directly—they can take over innocent computers and send it from them. So it’s the people whose computers have been hacked into, victims in their own right, who will end up paying for spam. This risk can be limited by letting people put an upper limit on the money in their accounts, but it is still serious.

And criminals can exploit the system in the other direction, too. They could hack into innocent computers and have them send “spam” to e-mail addresses the criminals control, collecting money in the process.

Trying to impose some sort of economic penalty on unwanted e-mail is a good idea, but it won’t work unless the endpoints are trusted. And we’re nowhere near that trust today.

http://blog.modernmechanix.com/2007/05/05/…


Does Secrecy Help Protect Personal Information?

Personal information protection is an economic problem, not a security problem. And the problem can be easily explained: The organizations we trust to protect our personal information do not suffer when information gets exposed. On the other hand, individuals who suffer when personal information is exposed don’t have the capability to protect that information.

There are actually two problems here: Personal information is easy to steal, and it’s valuable once stolen. We can’t solve one problem without solving the other. The solutions aren’t easy, and you’re not going to like them.

First, fix the economic problem. Credit card companies make more money extending easy credit and making it trivial for customers to use their cards than they lose from fraud. They won’t improve their security as long as you (and not they) are the one who suffers from identity theft. It’s the same for banks and brokerages: As long as you’re the one who suffers when your account is hacked, they don’t have any incentive to fix the problem. And data brokers like ChoicePoint are worse; they don’t suffer if they reveal your information. You don’t have a business relationship with them; you can’t even switch to a competitor in disgust.

Credit card security works as well as it does because the 1968 Truth in Lending Law limits consumer liability for fraud to $50. If the credit card companies could pass fraud losses on to the consumers, they would be spending far less money to stop those losses. But once Congress forced them to suffer the costs of fraud, they invented all sorts of security measures—real-time transaction verification, expert systems patrolling the transaction database and so on—to prevent fraud. The lesson is clear: Make the party in the best position to mitigate the risk responsible for the risk. What this will do is enable the capitalist innovation engine. Once it’s in the financial interest of financial institutions to protect us from identity theft, they will.
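
As a caricature of what those anti-fraud systems do (the rules, thresholds, and field names are invented for illustration): score each transaction against the cardholder’s recent history and hold the outliers for verification.

# Toy sketch of real-time transaction screening, in the spirit of the
# expert systems mentioned above. Rules, thresholds, and fields are invented.

def risk_score(txn, history):
    score = 0
    average = sum(h["amount"] for h in history) / len(history)
    if txn["amount"] > 5 * average:
        score += 2                                   # unusually large purchase
    if txn["country"] not in {h["country"] for h in history}:
        score += 2                                   # country never seen before
    if txn["merchant_type"] in {"wire_transfer", "gift_cards"}:
        score += 1                                   # categories favored by fraudsters
    return score

history = [{"amount": 40, "country": "US"}, {"amount": 95, "country": "US"}]
txn = {"amount": 1200, "country": "RO", "merchant_type": "gift_cards"}
if risk_score(txn, history) >= 3:
    print("hold transaction pending verification")   # this one gets flagged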

Second, stop using personal information to authenticate people. Watch how credit cards work. Notice that the store clerk barely looks at your signature, or how you can use credit cards remotely where no one can check your signature. The credit card industry learned decades ago that authenticating people has only limited value. Instead, they put most of their effort into authenticating the transaction, and they’re much more secure because of it.

This won’t solve the problem of securing our personal information, but it will greatly reduce the threat. Once the information is no longer of value, you only have to worry about securing the information from voyeurs rather than the more common—and more financially motivated—fraudsters.

And third, fix the other economic problem: Organizations that expose our personal information aren’t hurt by that exposure. We need a comprehensive privacy law that gives individuals ownership of their personal information and allows them to take action against organizations that don’t care for it properly.

“Passwords” like credit card numbers and mother’s maiden name used to work, but we’ve forever left the world where our privacy comes from the obscurity of our personal information and the difficulty others have in accessing it. We need to abandon security systems that are based on obscurity and difficulty, and build legal protections to take over where technological advances have left us exposed.

This essay appeared in the January issue of “Information Security,” as the second half of a point/counterpoint with Marcus Ranum.
http://informationsecurity.techtarget.com/magItem/…
Marcus’s half:
http://www.ranum.com/security/computer_security/…


Is Penetration Testing Worth It?

There are security experts who insist penetration testing is essential for network security, and you have no hope of being secure unless you do it regularly. And there are contrarian security experts who tell you penetration testing is a waste of time; you might as well throw your money away. Both of these views are wrong. The reality of penetration testing is more complicated and nuanced.

Penetration testing is a broad term. It might mean breaking into a network to demonstrate you can. It might mean trying to break into a network to document vulnerabilities. It might involve a remote attack, physical penetration of a data center or social engineering attacks. It might use commercial or proprietary vulnerability scanning tools, or rely on skilled white-hat hackers. It might just evaluate software version numbers and patch levels, and make inferences about vulnerabilities.
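
The cheapest flavor, inferring exposure from version and patch levels, is easy to sketch (every product name, version number, and advisory identifier below is fabricated): compare what is installed against known-vulnerable ranges and report the likely hits.

# Minimal sketch of patch-level inference: compare an inventory of installed
# software against known-vulnerable version ranges. Every product name,
# version, and advisory identifier below is fabricated for illustration.

ADVISORIES = [
    {"product": "examplehttpd", "fixed_in": (2, 4, 10), "id": "EXAMPLE-2007-001"},
    {"product": "exampledb",    "fixed_in": (5, 0, 3),  "id": "EXAMPLE-2007-002"},
]

INVENTORY = {"examplehttpd": (2, 4, 2), "exampledb": (5, 0, 3)}

def likely_vulnerable(inventory, advisories):
    findings = []
    for advisory in advisories:
        installed = inventory.get(advisory["product"])
        if installed is not None and installed < advisory["fixed_in"]:
            findings.append((advisory["product"], installed, advisory["id"]))
    return findings

for product, version, advisory_id in likely_vulnerable(INVENTORY, ADVISORIES):
    version_str = ".".join(str(part) for part in version)
    print(product, version_str, "is likely affected by", advisory_id)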

It’s going to be expensive, and you’ll get a thick report when the testing is done.

And that’s the real problem. You really don’t want a thick report documenting all the ways your network is insecure. You don’t have the budget to fix them all, so the document will sit around waiting to make someone look bad. Or, even worse, it’ll be discovered in a breach lawsuit. Do you really want an opposing attorney to ask you to explain why you paid to document the security holes in your network, and then didn’t fix them? Probably the safest thing you can do with the report, after you read it, is shred it.

Given enough time and money, a pen test will find vulnerabilities; there’s no point in proving it. And if you’re not going to fix all the uncovered vulnerabilities, there’s no point uncovering them. But there is a way to do penetration testing usefully. For years I’ve been saying security consists of protection, detection and response—and you need all three to have good security. Before you can do a good job with any of these, you have to assess your security. And done right, penetration testing is a key component of a security assessment.

I like to restrict penetration testing to the most commonly exploited critical vulnerabilities, like those found on the SANS Top 20 list. If you have any of those vulnerabilities, you really need to fix them.

If you think about it, penetration testing is an odd business. Is there an analogue to it anywhere else in security? Sure, militaries run these exercises all the time, but how about in business? Do we hire burglars to try to break into our warehouses? Do we attempt to commit fraud against ourselves? No, we don’t.

Penetration testing has become big business because systems are so complicated and poorly understood. We know about burglars and kidnapping and fraud, but we don’t know about computer criminals. We don’t know what’s dangerous today, and what will be dangerous tomorrow. So we hire penetration testers in the belief they can explain it.

There are two reasons why you might want to conduct a penetration test. One, you want to know whether a certain vulnerability is present because you’re going to fix it if it is. And two, you need a big, scary report to persuade your boss to spend more money. If neither is true, I’m going to save you a lot of money by giving you this free penetration test: You’re vulnerable.

Now, go do something useful about it.

This essay appeared in the March issue of “Information Security,” as the first half of a point/counterpoint with Marcus Ranum.
http://informationsecurity.techtarget.com/magItem/…
Marcus’s half:
http://www.ranum.com/security/computer_security/…


Do We Really Need a Security Industry?

Last week, I attended the Infosecurity Europe conference in London. Like at the RSA Conference in February, the show floor was chockablock full of network, computer and information security companies. As I often do, I mused about what it means for the IT industry that there are thousands of dedicated security products on the market: some good, more lousy, many difficult even to describe. Why aren’t IT products and services naturally secure, and what would it mean for the industry if they were?

I mentioned this in an interview with Silicon.com, and the published article seems to have caused a bit of a stir. Rather than letting people wonder what I really meant, I thought I should explain.

The primary reason the IT security industry exists is because IT products and services aren’t naturally secure. If computers were already secure against viruses, there wouldn’t be any need for antivirus products. If bad network traffic couldn’t be used to attack computers, no one would bother buying a firewall. If there were no more buffer overflows, no one would have to buy products to protect against their effects. If the IT products we purchased were secure out of the box, we wouldn’t have to spend billions every year making them secure.

Aftermarket security is actually a very inefficient way to spend our security dollars; it may compensate for insecure IT products, but doesn’t help improve their security. Additionally, as long as IT security is a separate industry, there will be companies making money based on insecurity—companies who will lose money if the internet becomes more secure.

Fold security into the underlying products, and the companies marketing those products will have an incentive to invest in security upfront, to avoid having to spend more cash obviating the problems later. Their profits would rise in step with the overall level of security on the internet. Initially we’d still be spending a comparable amount of money per year on security—on secure development practices, on embedded security and so on—but some of that money would be going into improving the quality of the IT products we’re buying, and would reduce the amount we spend on security in future years.

I know this is a utopian vision that I probably won’t see in my lifetime, but the IT services market is pushing us in this direction. As IT becomes more of a utility, users are going to buy a whole lot more services than products. And by nature, services are more about results than technologies. Service customers—whether home users or multinational corporations—care less and less about the specifics of security technologies, and increasingly expect their IT to be integrally secure.

Eight years ago, I formed Counterpane Internet Security on the premise that end users (big corporate users, in this case) really don’t want to have to deal with network security. They want to fly airplanes, produce pharmaceuticals or do whatever their core business is. They don’t want to hire the expertise to monitor their network security, and will gladly farm it out to a company that can do it for them. We provided an array of services that took day-to-day security out of the hands of our customers: security monitoring, security-device management, incident response. Security was something our customers purchased, but they purchased results, not details.

Last year, BT bought Counterpane, further embedding network security services into the IT infrastructure. BT has customers that don’t want to deal with network management at all; they just want it to work. They want the internet to be like the phone network, or the power grid, or the water system; they want it to be a utility. For these customers, security isn’t even something they purchase: It’s one small part of a larger IT services deal. It’s the same reason IBM bought ISS: to be able to have a more integrated solution to sell to customers.

This is where the IT industry is headed, and when it gets there, there’ll be no point in user conferences like Infosec and RSA. They won’t go away; they’ll simply become industry conferences. If you want to measure progress, look at the demographics of these conferences. A shift toward infrastructure-geared attendees is a measure of success.

Of course, security products won’t disappear—at least, not in my lifetime. There’ll still be firewalls, antivirus software and everything else. There’ll still be startup companies developing clever and innovative security technologies. But the end user won’t care about them. They’ll be embedded within the services sold by large IT outsourcing companies like BT, EDS and IBM, or ISPs like EarthLink and Comcast. Or they’ll be a check-box item somewhere in the core switch.

IT security is getting harder—increasing complexity is largely to blame—and the need for aftermarket security products isn’t disappearing anytime soon. But there’s no earthly reason why users need to know what an intrusion-detection system with stateful protocol analysis is, or why it’s helpful in spotting SQL injection attacks. The whole IT security industry is an accident—an artifact of how the computer industry developed. As IT fades into the background and becomes just another utility, users will simply expect it to work—and the details of how it works won’t matter.

http://software.silicon.com/security/…
http://www.techworld.com/security/s/index.cfm?…
http://techdigest.tv/2007/04/security_guru_q.html
http://www.itbusinessedge.com/s/top/?p=114

Complexity and security:
http://www.schneier.com/crypto-gram-0003.html#8

Commentary on essay:
http://www.networkworld.com/community/?q=node/14813
http://it.slashdot.org/it/07/05/03/1936237.shtml
http://matt-that.com/?p=5

This essay originally appeared in Wired:
http://www.wired.com/politics/security/commentary/…


Comments from Readers

There are hundreds of comments—many of them interesting—on these topics on my blog. Search for the story you want to comment on, and join in.

http://www.schneier.com/


CRYPTO-GRAM is a free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise. You can subscribe, unsubscribe, or change your address on the Web at <http://www.schneier.com/crypto-gram.html>. Back issues are also available at that URL.

Please feel free to forward CRYPTO-GRAM, in whole or in part, to colleagues and friends who will find it valuable. Permission is also granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.

CRYPTO-GRAM is written by Bruce Schneier. Schneier is the author of the best sellers “Beyond Fear,” “Secrets and Lies,” and “Applied Cryptography,” and an inventor of the Blowfish and Twofish algorithms. He is founder and CTO of BT Counterpane, and is a member of the Board of Directors of the Electronic Privacy Information Center (EPIC). He is a frequent writer and lecturer on security topics. See <http://www.schneier.com>.

BT Counterpane is the world’s leading protector of networked information – the inventor of outsourced security monitoring and the foremost authority on effective mitigation of emerging IT threats. BT Counterpane protects networks for Fortune 1000 companies and governments world-wide. See <http://www.counterpane.com>.

Crypto-Gram is a personal newsletter. Opinions expressed are not necessarily those of BT or BT Counterpane.

Copyright (c) 2007 by Bruce Schneier.

Sidebar photo of Bruce Schneier by Joe MacInnis.