Crypto-Gram

May 15, 2008

by Bruce Schneier
Founder and CTO
BT Counterpane
schneier@schneier.com
http://www.schneier.com
http://www.counterpane.com

A free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise.

For back issues, or to subscribe, visit <http://www.schneier.com/crypto-gram.html>.

You can read this issue on the web at <http://www.schneier.com/crypto-gram-0805.html>. These same essays appear in the “Schneier on Security” blog: <http://www.schneier.com/>. An RSS feed is available.


In this issue:
      Happy Ten-Year Anniversary
      Dual-Use Technologies and the Equities Issue
      Crossing Borders with Laptops and PDAs
      News
      Third Annual Movie-Plot Threat Contest Winner
      The RSA Conference
      Risk Preferences in Chimpanzees and Bonobos
      Schneier/BT News
      The Doghouse: Passwordsafe.com
      The Ethics of Vulnerability Research
      Our Data, Ourselves
      Comments from Readers


Happy Ten-Year Anniversary

Ten years ago I started Crypto-Gram. It was a monthly newsletter written entirely by me. No guest columns. No advertising. Nothing but me writing about security, published on the 15th of every month. Now, 120 issues later, none of that has changed.

I started Crypto-Gram because I had a lot to say about security, and book-length commentaries were too slow and too infrequent. Sure, I was writing the occasional column in the occasional magazine, but those were also too slow and infrequent. Crypto-Gram was supposed to be my personal voice on security, sent directly to those who wanted to read it.

I originally thought about charging for Crypto-Gram. I knew of several newsletters that funded themselves through subscription fees, and figured that a couple of hundred subscribers at $150 or so would sustain it very nicely. I don’t remember why I decided not to—did someone convince me, or did I figure it out myself—but it was easily the smartest decision I made about this newsletter. If I’d charged money for the thing, no one would have read it. Since I didn’t, lots of people subscribed.

There were 457 subscribers by the end of the first day. After that, circulation climbed slowly and steadily. Here are the totals for May of each year:

1999 15964
2000 33827
2001 45832
2002 58046
2003 66368
2004 75907
2005 83835
2006 87839
2007 92488
2008 98618

Those numbers hide a lot of readers, like the tens of thousands who read Crypto-Gram via the Web. I also know of people who forward my newsletter to hundreds of others. There are many foreign translations with their own subscription lists. These days I estimate that I have about 25,000 newsletter readers not included in those numbers.

I have no idea where the initial batch of subscribers came from. Nor do I remember how people subscribed before the webpage form was done. I do remember my first big burst of subscribers, though. It was following my special issue after 9/11. I wrote something short for the September issue, but I found that I couldn’t stop writing. Two weeks later, I published a special issue on the terrorist attacks. Readers forwarded that issue again and again, and I ended up with many new subscribers as a result.

Reader comments began earlier, in December 1998. I found I was getting some really intelligent comments from my readers—especially those that disagreed with me—and I wanted to publish some of them. Some of the disagreements were nasty. In October 1998, I started a column called “The Doghouse,” where I made fun of snake-oil security products. Some of the companies didn’t like being so characterized, and sent me threatening legal letters.

Turns out that publishing those sorts of threats as letters to Crypto-Gram was the best defense, even though my lawyers always discouraged it. None of these incidents ever amounted to anything, even when court papers were occasionally filed.

Over the years, Crypto-Gram’s focus has changed. Initially, it was all cryptography. Then, more computer and network security. Then—especially after 9/11—more general security: terrorism, airplanes, ID cards, voting machines, and so on. And now, more economics and psychology of security. My career has been a progression from the specific to the general, and Crypto-Gram has generalized to reflect that.

The next big change to Crypto-Gram came in October 2004. I had been reading about blogging, and wondered for several months if switching Crypto-Gram over to blog format was a good idea or not. Again, it was about speed and frequency. I found that others were commenting on security stories faster, and that by the time Crypto-Gram would come out, people had already linked to other stories. A blog would allow me to get my commentary out even faster, and to be part of the initial discussions.

I went back and forth. Several people advised me to change, that blogging was the format of the future. I was skeptical, preferring to push my newsletter into my readers’ mailboxes every month. I sent a survey to 400 of my subscribers—200 random subscribers and 200 people who had subscribed within the past month—asking which format they preferred. My eventual solution was the second smartest thing I did with this newsletter: to do both.

The Schneier on Security blog started out as Crypto-Gram entries, delivered daily. And the early blog entries looked a lot like Crypto-Gram articles, with links at the end. Over the following months I learned more about the blogging style, and the entries started looking more like blog entries. Now the blog is primary, and on the 15th of every month I take the previous month’s blog entries and reconfigure them into Crypto-Gram format. Even today, most readers prefer to receive Crypto-Gram in their e-mail box every month—even if they also read the blog online.

These days, I like both. I like the immediacy of the blog, and I like the e-mail format of Crypto-Gram. And even after ten years, I still like the writing.

People often ask me where I find the time to do all of that writing. It’s an odd question for me, because it’s what I enjoy doing. I find time at home, on airplanes, in hotel rooms, everywhere. Writing isn’t a chore—okay, maybe sometimes it is—it’s something that relaxes me. I enjoy putting my ideas down in a coherent narrative flow. And there’s nothing that pleases me more than the fact that people read it.

The best fan mail I get from a reader says something like: “You changed the way I think.” That’s what I want to do. I want to change the way you think about security. I want to change the way you think about threats, and risk, and trade-offs, about security products and services, about security rhetoric in politics. It matters less whether you agree with me or disagree; what matters is that you’re thinking differently.

Thank you. Thank you on this 10th anniversary issue. Thank you, long-time readers. Thank you, new readers. Thank you for continuing to read what I have to write. This is still a lot of fun—and interesting and thought-provoking—for me. I hope it continues to be interesting, thought-provoking, and fun for you.

Crypto-Gram page, including information about translations:
http://www.schneier.com/crypto-gram.html

Crypto-Gram back issues:
http://www.schneier.com/crypto-gram-back.html

First blog post:
https://www.schneier.com/blog/archives/2004/10/…


Dual-Use Technologies and the Equities Issue

On April 27, 2007, Estonia was attacked in cyberspace. Following a diplomatic incident with Russia about the relocation of a Soviet World War II memorial, the networks of many Estonian organizations, including the Estonian parliament, banks, ministries, newspapers and broadcasters, were attacked and—in many cases—shut down. Estonia was quick to blame Russia, which was equally quick to deny any involvement.

It was hyped as the first cyberwar: Russia attacking Estonia in cyberspace. But nearly a year later, evidence that the Russian government was involved in the denial-of-service attacks still hasn’t emerged. Though Russian hackers were indisputably the major instigators of the attack, the only individuals positively identified have been young ethnic Russians living inside Estonia, who were pissed off over the statue incident.

You know you’ve got a problem when you can’t tell a hostile attack by another nation from bored kids with an axe to grind.

Separating cyberwar, cyberterrorism and cybercrime isn’t easy; these days you need a scorecard to tell the difference. It’s not just that it’s hard to trace people in cyberspace, it’s that military and civilian attacks—and defenses—look the same.

The traditional term for technology the military shares with civilians is “dual use.” Unlike hand grenades and tanks and missile targeting systems, dual-use technologies have both military and civilian applications. Dual-use technologies used to be exceptions; even things you’d expect to be dual use, like radar systems and toilets, were designed differently for the military. But today, almost all information technology is dual use. Military and civilians use the same operating systems, the same networking protocols, the same applications, and even the same security software.

And attack technologies are the same. The recent spurt of targeted hacks against U.S. military networks, commonly attributed to China, exploits the same vulnerabilities and uses the same techniques as criminal attacks against corporate networks. Internet worms make the jump to classified military networks in less than 24 hours, even if those networks are physically separate. The Navy Cyber Defense Operations Command uses the same tools against the same threats as any large corporation.

Because attackers and defenders use the same IT technology, there is a fundamental tension between cyberattack and cyberdefense. The National Security Agency has referred to this as the “equities issue,” and it can be summarized as follows: when a military discovers a vulnerability in a dual-use technology, they can do one of two things. They can alert the manufacturer and fix the vulnerability, thereby protecting both the good guys and the bad guys. Or they can keep quiet about the vulnerability and not tell anyone, thereby leaving the good guys insecure but also leaving the bad guys insecure.

The equities issue has long been hotly debated inside the NSA. Basically, the NSA has two roles: eavesdrop on their stuff, and protect our stuff. When both sides use the same stuff, the agency has to decide whether to exploit vulnerabilities to eavesdrop on their stuff or close the same vulnerabilities to protect our stuff.

In the 1980s and before, the tendency of the NSA was to keep vulnerabilities to itself. In the 1990s, the tide shifted, and the NSA started to open up and help us all improve our security defenses. But after the attacks of 9/11, the NSA shifted back to the attack: vulnerabilities were to be hoarded in secret. Slowly, things in the U.S. are shifting back again.

So now we’re seeing the NSA helping secure Windows Vista and releasing its own version of Linux. The DHS, meanwhile, is funding a project to secure popular open source software packages, and across the Atlantic the UK’s GCHQ is finding bugs in PGPDisk and reporting them back to the company. (NSA is rumored to be doing the same thing with BitLocker.)

I’m in favor of this trend, because my security improves for free. Whenever the NSA finds a security problem and gets the vendor to fix it, our security gets better. It’s a side-benefit of dual-use technologies.

But I want governments to do more. I want them to use their buying power to improve my security. I want them to offer countrywide contracts for software, both security and non-security, that have explicit security requirements. If these contracts are big enough, companies will work to modify their products to meet those requirements. And again, we all benefit from the security improvements.

The only example of this model I know about is a U.S. government-wide procurement competition for full-disk encryption, but this can certainly be done with firewalls, intrusion detection systems, databases, networking hardware, even operating systems.

When it comes to IT technologies, the equities issue should be a no-brainer. The good uses of our common hardware, software, operating systems, network protocols, and everything else vastly outweigh the bad uses. It’s time that the government used its immense knowledge and experience, as well as its buying power, to improve cybersecurity for all of us.

Estonia’s cyberwar:
http://www.wired.com/politics/security/magazine/…
http://blog.wired.com/27bstroke6/2008/01/…

Cyberwar, cyberterrorism, etc.
https://www.schneier.com/blog/archives/2007/06/…

NSA and DHS cybersecurity initiatives:
https://www.schneier.com/blog/archives/2007/01/…
http://www.nsa.gov/selinux/
http://www.eweek.com/c/a/Security/…
https://www.schneier.com/blog/archives/2007/01/…

This essay originally appeared on Wired.com.
http://www.wired.com/politics/security/commentary/…


Crossing Borders with Laptops and PDAs

Last month a US court ruled that border agents can search your laptop, or any other electronic device, when you’re entering the country. They can take your computer and download its entire contents, or keep it for several days. Customs and Border Protection has not published any rules regarding this practice, and I and others have written a letter to Congress urging it to investigate and regulate this practice.

But the US is not alone. British customs agents search laptops for pornography. And there are reports on the internet of this sort of thing happening at other borders, too. You might not like it, but it’s a fact. So how do you protect yourself?

Encrypting your entire hard drive, something you should certainly do for security in case your computer is lost or stolen, won’t work here. The border agent is likely to start this whole process with a “please type in your password.” Of course you can refuse, but the agent can search you further, detain you longer, refuse you entry into the country and otherwise ruin your day.

You’re going to have to hide your data. Set a portion of your hard drive to be encrypted with a different key – even if you also encrypt your entire hard drive – and keep your sensitive data there. Lots of programs allow you to do this. I use PGP Disk (from pgp.com). TrueCrypt (truecrypt.org) is also good, and free.

While customs agents might poke around on your laptop, they’re unlikely to find the encrypted partition. (You can make the icon invisible, for some added protection.) And if they download the contents of your hard drive to examine later, you won’t care.

Be sure to choose a strong encryption password. Details are too complicated for a quick tip, but basically anything easy to remember is easy to guess. Unfortunately, this isn’t a perfect solution. Your computer might have left a copy of the password on the disk somewhere, and smart forensic software will find it.
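If you want something concrete, here is a minimal sketch, in Python, of the usual advice: a passphrase of words chosen at random beats any clever string you invent yourself. It uses the standard library’s secrets module; the word-list path is an assumption (common on Linux and Mac systems), so substitute any large word list you have.

    import math
    import secrets

    def random_passphrase(num_words=6, wordlist="/usr/share/dict/words"):
        # Assumes a newline-delimited word list at the given path.
        with open(wordlist) as f:
            words = sorted({w.strip().lower() for w in f if w.strip().isalpha()})
        # secrets.choice() draws from the OS's cryptographic random source;
        # random.choice() would be predictable.
        phrase = " ".join(secrets.choice(words) for _ in range(num_words))
        bits = num_words * math.log2(len(words))  # rough entropy estimate
        return phrase, bits

    phrase, bits = random_passphrase()
    print(phrase, "(~%.0f bits)" % bits)

Six words from a 100,000-word list is roughly 100 bits, far more than any password you could memorise by cleverness alone. But none of this helps if a copy of the passphrase ends up in a swap file on disk.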

So your best defence is to clean up your laptop. A customs agent can’t read what you don’t have. You don’t need five years’ worth of email and client data. You don’t need your old love letters and those photos (you know the ones I’m talking about). Delete everything you don’t absolutely need. And use a secure file erasure program to do it. While you’re at it, delete your browser’s cookies, cache and browsing history. It’s nobody’s business what websites you’ve visited. And turn your computer off – don’t just put it to sleep – before you go through customs; that deletes other things. Think of all this as the last thing to do before you stow your electronic devices for landing.

Some companies now give their employees forensically clean laptops for travel, and have them download any sensitive data over a virtual private network once they’ve entered the country. They send any work back the same way, and delete everything again before crossing the border to go home. This is a good idea if you can do it.

If you can’t, consider putting your sensitive data on a USB drive or even a camera memory card: even 16GB cards are reasonably priced these days. Encrypt it, of course, because it’s easy to lose something that small. Slip it in your pocket, and it’s likely to remain unnoticed even if the customs agent pokes through your laptop. If someone does discover it, you can try saying: “I don’t know what’s on there. My boss told me to give it to the head of the New York office.” If you’ve chosen a strong encryption password, you won’t care if he confiscates it.

Lastly, don’t forget your phone and PDA. Customs agents can search those too: emails, your phone book, your calendar. Unfortunately, there’s nothing you can do here except delete things.

I know this all sounds like work, and that it’s easier to just ignore everything here and hope you don’t get searched. Today, the odds are in your favour. But new forensic tools are making automatic searches easier and easier, and the recent US court ruling is likely to embolden other countries. It’s better to be safe than sorry.
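A technical aside: the core of what a secure file erasure program does is simple enough to sketch in Python. This is illustrative only, and the file name is invented. On journaling file systems, SSDs with wear levelling, and anything that has been backed up, overwritten data can survive elsewhere, which is exactly why purpose-built tools and encryption still matter.

    import os

    CHUNK = 1 << 20  # overwrite in 1 MiB chunks

    def overwrite_and_delete(path, passes=1):
        # Overwrite the file's contents with random bytes, force the
        # writes out to disk, then unlink the file.
        length = os.path.getsize(path)
        with open(path, "r+b") as f:
            for _ in range(passes):
                f.seek(0)
                remaining = length
                while remaining > 0:
                    n = min(CHUNK, remaining)
                    f.write(os.urandom(n))
                    remaining -= n
                f.flush()
                os.fsync(f.fileno())
        os.remove(path)

    overwrite_and_delete("old-client-data.csv")  # hypothetical file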

My advice on choosing secure passwords:
http://www.schneier.com/essay-148.html

This essay originally appeared in The Guardian:
http://www.guardian.co.uk/technology/2008/may/15/…


News

I previously wrote about the UK’s Regulation of Investigatory Powers Act (RIPA), which was sold as a means to tackle terrorism and other serious crimes, being used against animal rights protestors. The latest news from the UK is that a local council has used provisions of the act to put a couple and their children under surveillance, for “suspected fraudulent school place applications”:
http://news.bbc.co.uk/1/hi/england/dorset/7341179.stm
http://www.theregister.co.uk/2008/04/11/…
http://news.bbc.co.uk/1/hi/england/dorset/7343445.stm
https://www.schneier.com/blog/archives/2007/11/…

A researcher talks about humans’ inherent capability for evil:
http://www.independent.co.uk/news/people/…
Comparing cybersecurity to early 1800s security on the high seas:
http://www.csoonline.com/article/print/329164

Usually I don’t bother writing about these, but this data leak in Oklahoma is particularly bad. Anyone with basic SQL knowledge could have registered anyone he wanted as a sex offender.
http://thedailywtf.com/Articles/…
Funny surveillance camera photos:
http://www.flickr.com/photos/spiggycat/2393460671/…
http://www.dailymail.co.uk/pages/live/articles/news/…
http://www.banksy.co.uk/outdoors/images/landscapes/…
http://www.artofthestate.co.uk/photos/…
Homeland Security Secretary Michael Chertoff says fingerprints aren’t personal data.
http://thinkprogress.org/2008/04/16/…
Sounds like he’s confusing secret data with personal data. Lots of personal data isn’t particularly secret.

I am sick of the story that people are willing to reveal their passwords for a bar of chocolate. I haven’t seen any indication they actually verified that the passwords are real. I would certainly give up a fake password for a bar of chocolate.
http://blogs.wsj.com/biztech/2008/04/16/…
Airport security game:
http://www.shockwave.com/gamelanding/…

The TSA wants a tool that will assess risks against transportation networks.
http://www.gsnmagazine.com/cms/resources/…
You don’t have to be very good to qualify here: another automated system put Boise, ID, at the top of its list of most vulnerable cities. The bar isn’t very high; I’m just saying.
http://www.washingtonpost.com/wp-dyn/content/…
This is interesting research: given a security patch, can you automatically reverse-engineer the security vulnerability that is being patched and create exploit code to exploit it? Turns out you can.
http://www.cs.cmu.edu/~dbrumley/pubs/apeg.html

Hacking ISP error pages. It’s a big deal.
http://blog.wired.com/27bstroke6/2008/04/…
http://www.theregister.co.uk/2008/04/20/…

This won best-paper award at the First USENIX Workshop on Large-Scale Exploits and Emergent Threats; it’s about maliciously designing processors to support hacking:
http://www.usenix.org/event/leet08/tech/full_papers/…
Theoretical? Sure. But combine this with stories of counterfeit computer hardware from China, and you’ve got yourself a potentially serious problem.
http://www.hardwareanalysis.com/content/article/…
List of deaths, intended to prevent identity theft, is used for identity theft.
http://blog.wired.com/27bstroke6/2008/04/…

Protect your laptop screen from roving eyes, a low-tech solution:
http://www.engadget.com/2008/04/16/…
Boring jobs dull the mind. We already knew this, but it’s good to reinforce the lesson.
http://news.bbc.co.uk/2/hi/science/nature/7358863.stm
This video demonstrates the point nicely.
http://www.youtube.com/watch?v=Ahg6qcgoay4

Interesting investigative article from Business Week on Chinese cyber espionage against the U.S. government, and the government’s reaction.
http://www.businessweek.com/magazine/content/08_16/…

Will we ever win the war on photographers?
http://www.memphisflyer.com/memphis/Content?…

Virtual kidnapping is a real crime in Mexico:
http://www.nytimes.com/2008/04/29/world/americas/…

This picture, demonstrating a SQL injection attack against automatic license plate scanners, is almost certainly Photoshopped, and a joke, but it’s a clever idea nonetheless. As automatic license plate scanners become more common, why not? (A sketch of the attack, and the standard fix, follows the links below.)
http://www.areino.com/hackeando/
http://www.schneier.com/essay-057.html
Reminds me of this xkcd cartoon:
http://xkcd.com/327/
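The gag works because the underlying flaw, and its fix, are mundane. Here is a minimal sketch using Python’s built-in sqlite3 module; the plate string is invented. Splice untrusted scanner output into the SQL text and a crafted plate becomes code; bind it as a parameter and it stays data.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE plates (plate TEXT)")

    scanned = "ZU 0666'); DROP TABLE plates; --"  # a plate crafted like the cartoon's

    # Vulnerable pattern: splicing scanner output into the SQL string.
    # executescript() runs multiple statements, so this would execute the DROP:
    #   conn.executescript("INSERT INTO plates VALUES ('%s')" % scanned)

    # Safe pattern: a parameterized query treats the input as pure data.
    conn.execute("INSERT INTO plates (plate) VALUES (?)", (scanned,))
    print(conn.execute("SELECT plate FROM plates").fetchone()[0])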

Funny DHS “Neighborhood Watch” hoax:
http://dhsnnw.org/index.html
http://itp.nyu.edu/blogs/ecm292_thesis/

Microsoft is distributing a USB drive filled with Windows forensic analysis tools for police:
https://www.schneier.com/blog/archives/2008/04/…

Heroin vs. terrorism: A nice essay on security trade-offs.
http://www.timesonline.co.uk/tol/comment/columnists/…
Snarky essay on what to worry about:
http://tencartrain.com/?p=627

Sky marshals on the no-fly list. If it weren’t so sad, it would be funny.
http://www.washingtontimes.com/apps/pbcs.dll/…
http://www.economist.com/blogs/gulliver/2008/05/…
I just received the second edition of Ross Anderson’s “Security Engineering” in the mail. It’s beautiful. This is the best book on the topic there is, and I recommend it to everyone working in this field—and not just because I wrote the foreword. You can download the preface and six chapters. (You can also download the entire first edition.)
http://www.amazon.com/…
http://www.cl.cam.ac.uk/~rja14/bruce.html
http://www.cl.cam.ac.uk/~rja14/book.html

U.S. State Department loses hundreds of laptops. Bet you anything they were not encrypted.
http://www.cqpolitics.com/wmspage.cfm?…
London’s cameras don’t reduce crime. This is, of course, absolutely no surprise.
http://news.bbc.co.uk/1/hi/uk/7384843.stm
http://www.guardian.co.uk/uk/2008/may/06/ukcrime1
https://www.schneier.com/blog/archives/2008/05/…

Al Qaeda threat overrated:
http://www.newsweek.com/id/135654/

Remember the two men who were exhibiting “unusual behavior” on a Washington-state ferry last summer? Turns out they were tourists, not terrorists.
https://www.schneier.com/blog/archives/2008/05/…

Excellent article chronicling the U.S. surveillance debate from the mid-1980s until today. Don’t expect good coverage of the current debate, however: the legality of the NSA’s recent domestic eavesdropping program, and the legality of the telcos’ assistance.
http://www.govexec.com/dailyfed/0408/042208nj1.htm

A handy guide to cell phone spying:
http://www.geeksaresexy.net/2008/05/05/…
I don’t know what I think of Sweet Dreams Security and its attempts to make security cuddly.
http://www.deardad.net/sds-html/
https://www.schneier.com/blog/archives/2008/05/…

Terrorism as a tax—certainly a good way to look at it.
https://www.schneier.com/blog/archives/2008/05/…

Interesting Microsoft patent application: Guardian Angel.
http://appft1.uspto.gov/netacgi/nph-Parser?…
Note that Bill Gates and Ray Ozzie are co-inventors.

The Department of Homeland Security has a new $200 million Comprehensive National Cybersecurity Initiative (CNCI). Congress is happy to fund it, but kind of wants to know what it’s going to do. I have to admit, I’m kind of curious myself.
http://blog.wired.com/27bstroke6/2008/05/…
http://arstechnica.com/news.ars/post/…

The U.S. Air Force considers creating its own botnet. Actually, I think this is a fine idea—as long as they only use computers that they legally own.
http://www.armedforcesjournal.com/2008/05/3375884
http://arstechnica.com/news.ars/post/…


Third Annual Movie-Plot Threat Contest Winner

On April 7, in my blog—seven days late—I announced the Third Annual Movie-Plot Threat Contest:

“For this contest, the goal is to create fear. Not just any fear, but a fear that you can alleviate through the sale of your new product idea. There are lots of risks out there, some of them serious, some of them so unlikely that we shouldn’t worry about them, and some of them completely made up. And there are lots of products out there that provide security against those risks.

“Your job is to invent one. First, find a risk or create one. It can be a terrorism risk, a criminal risk, a natural-disaster risk, a common household risk—whatever. The weirder the better. Then, create a product that everyone simply has to buy to protect him- or herself from that risk. And finally, write a catalog ad for that product.

“Entries are limited to 150 words … because fear doesn’t require a whole lot of explaining. Tell us why we should be afraid, and why we should buy your product.”

On May 7, I posted five semi-finalists out of the 327 blog comments:

* DNA adulteratometer to detect waiters spitting in your soup.

* Toothpaste test strips.

* SOS device for people locked in car trunks.

* Anti-laser-pointer eyeglasses.

* “Alertness alert” heartbeat monitor.

Sadly, two of those five were above the 150-word limit. Out of the three remaining, I (with the help of my readers) have chosen a winner.

Presenting, the winner of the Third Annual Movie-Plot Threat Contest, Aaron Massey:

“Tommy Tester Toothpaste Strips:

“Many Americans were shocked to hear the results of the research trials regarding heavy metals and toothpaste conducted by the New England Journal of Medicine, which FDA is only now attempting to confirm. This latest scare comes after hundreds of deaths were linked to toothpaste contaminated with diethylene glycol, a potentially dangerous chemical used in antifreeze.

“In light of this continuing health risk, Hamilton Health Labs is proud to announce Tommy Tester Toothpaste Strips! Just apply a dab of toothpaste from a fresh tube onto the strip and let it rest for 3 minutes. It’s just that easy! If the strip turns blue, rest assured that your entire tube of toothpaste is safe. However, if the strip turns pink, dispose of the toothpaste immediately and call the FDA health emergency number at 301-443-1240.

“Do not let your family become a statistic when the solution is only $2.95!”

Aaron wins, well, nothing really, except the fame and glory afforded by Crypto-Gram and my blog. So give him some fame and glory. Congratulations.

Announcement:
https://www.schneier.com/blog/archives/2008/04/…

Semifinalists:
https://www.schneier.com/blog/archives/2008/05/…


The RSA Conference

Last week was the RSA Conference, easily the largest information security conference in the world. Over 17,000 people descended on San Francisco’s Moscone Center to hear some of the over 250 talks, attend I-didn’t-try-to-count parties, and try to evade over 350 exhibitors vying to sell them stuff.

Talk to the exhibitors, though, and the most common complaint is that the attendees aren’t buying.

It’s not the quality of the wares. The show floor is filled with new security products, new technologies, and new ideas. Many of these are products that will make the attendees’ companies more secure in all sorts of different ways. The problem is that most of the people attending the RSA Conference can’t understand what the products do or why they should buy them. So they don’t.

I spoke with one person whose trip was paid for by a smallish security firm. He was one of the company’s first customers, and the company was proud to parade him in front of the press. I asked him if he walked through the show floor, looking at the company’s competitors to see if there was any benefit to switching.

“I can’t figure out what any of those companies do,” he replied.

I believe him. The booths are filled with broad product claims, meaningless security platitudes, and unintelligible marketing literature. You could walk into a booth, listen to a five-minute sales pitch by a marketing type, and still not know what the company does. Even seasoned security professionals are confused.

Commerce requires a meeting of minds between buyer and seller, and it’s just not happening. The sellers can’t explain what they’re selling to the buyers, and the buyers don’t buy because they don’t understand what the sellers are selling. There’s a mismatch between the two; they’re so far apart that they’re barely speaking the same language.

This is a bad thing in the near term—some good companies will go bankrupt and some good security technologies won’t get deployed—but it’s a good thing in the long run. It demonstrates that the computer industry is maturing: IT is getting complicated and subtle, and users are starting to treat it like infrastructure.

For a while now I have predicted the death of the security industry. Not the death of information security as a vital requirement, of course, but the death of the end-user security industry that gathers at the RSA Conference. When something becomes infrastructure—power, water, cleaning service, tax preparation—customers care less about details and more about results. Technological innovations become something the infrastructure providers pay attention to, and they package it for their customers.

No one wants to buy security. They want to buy something truly useful—database management systems, Web 2.0 collaboration tools, a company-wide network—and they want it to be secure. They don’t want to have to become IT security experts. They don’t want to have to go to the RSA Conference. This is the future of IT security.

You can see it in the large IT outsourcing contracts that companies are signing—not security outsourcing contracts, but more general IT contracts that include security. You can see it in the current wave of industry consolidation: not large security companies buying small security companies, but non-security companies buying security companies. And you can see it in the new popularity of software as a service: Customers want solutions; who cares about the details?

Imagine if the inventor of antilock brakes—or any automobile safety or security feature—had to sell them directly to the consumer. It would be an uphill battle convincing the average driver that he needed to buy them; maybe that technology would have succeeded and maybe it wouldn’t. But that’s not what happens. Antilock brakes, airbags, and that annoying sensor that beeps when you’re backing up too close to another object are sold to automobile companies, and those companies bundle them together into cars that are sold to consumers. This doesn’t mean that automobile safety isn’t important, and often these new features are touted by the car manufacturers.

The RSA Conference won’t die, of course. Security is too important for that. There will still be new technologies, new products, and new start-ups. But it will become inward-facing, slowly turning into an industry conference. It’ll be security companies selling to the companies who sell to corporate and home users—and will no longer be a 17,000-person user conference.

“Death of the Security Industry”:
http://www.schneier.com/essay-196.html

Industry consolidation:
http://www.schneier.com/essay-209.html

Commentary:
http://www.computerweekly.com/blogs/david_lacey/…
This essay originally appeared on Wired.com.
http://www.wired.com/politics/security/news/2008/04/…


Risk Preferences in Chimpanzees and Bonobos

I’ve already written about prospect theory, which explains how people approach risk. People tend to be risk averse when it comes to gains, and risk seeking when it comes to losses:

“Evolutionarily, presumably it is a better survival strategy to—all other things being equal, of course—accept small gains rather than risking them for larger ones, and risk larger losses rather than accepting smaller losses. Lions chase young or wounded wildebeest because the investment needed to kill them is lower. Mature and healthy prey would probably be more nutritious, but there’s a risk of missing lunch entirely if it gets away. And a small meal will tide the lion over until another day. Getting through today is more important than the possibility of having food tomorrow.

“Similarly, it is evolutionarily better to risk a larger loss than to accept a smaller loss. Because animals tend to live on the razor’s edge between starvation and reproduction, any loss of food—whether small or large—can be equally bad. That is, both can result in death. If that’s true, the best option is to risk everything for the chance at no loss at all.”

This behavior has been demonstrated in animals as well: “species of insects, birds and mammals range from risk neutral to risk averse when making decisions about amounts of food, but are risk seeking towards delays in receiving food.”
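A quick worked example makes the asymmetry concrete. This sketch uses the prospect-theory value function with Tversky and Kahneman’s 1992 parameter estimates (an exponent of 0.88 and a loss-aversion factor of 2.25); the dollar amounts are invented for illustration and have nothing to do with the primate study below.

    def value(x, alpha=0.88, lam=2.25):
        # Prospect-theory value function: concave for gains; steeper
        # and convex for losses (loss aversion).
        return x ** alpha if x >= 0 else -lam * ((-x) ** alpha)

    # Gains: a sure $500 scores higher than a 50% shot at $1,000,
    # so the sure thing wins -> risk averse about gains.
    print(value(500), 0.5 * value(1000))    # ~237.2 vs. ~218.3

    # Losses: a 50% chance of losing $1,000 scores higher (less
    # negative) than surely losing $500 -> risk seeking about losses.
    print(value(-500), 0.5 * value(-1000))  # ~-533.6 vs. ~-491.1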

A recent study examines the relative risk preferences in two closely related species: chimpanzees and bonobos. “Human and non-human animals tend to avoid risky prospects. If such patterns of economic choice are adaptive, risk preferences should reflect the typical decision-making environments faced by organisms. However, this approach has not been widely used to examine the risk sensitivity in closely related species with different ecologies. Here, we experimentally examined risk-sensitive behaviour in chimpanzees (Pan troglodytes) and bonobos (Pan paniscus), closely related species whose distinct ecologies are thought to be the major selective force shaping their unique behavioural repertoires. Because chimpanzees exploit riskier food sources in the wild, we predicted that they would exhibit greater tolerance for risk in choices about food. Results confirmed this prediction: chimpanzees significantly preferred the risky option, whereas bonobos preferred the fixed option. These results provide a relatively rare example of risk-prone behaviour in the context of gains and show how ecological pressures can sculpt economic decision making.”

The basic argument is that in the natural environment of the chimpanzee, if you don’t take risks you don’t get any of the high-value rewards (e.g., monkey meat). Bonobos “rely more heavily than chimpanzees on terrestrial herbaceous vegetation, a more temporally and spatially consistent food source.” So chimpanzees are less likely to avoid taking risks.

Fascinating stuff, but there are at least two problems with this study. The researchers acknowledge the first in their paper: the animals studied—five of each species—were from the Wolfgang Koehler Primate Research Center at the Leipzig Zoo, and the experimenters were unable to rule out differences in the “experiences, cultures and conditions of the two specific groups tested here.”

The second problem is more general: we know very little about the life of bonobos in the wild. There are a lot of popular stereotypes about bonobos, but they’re sloppy at best.

Even so, I like seeing this kind of research. It’s fascinating.

Blog entry URL:
https://www.schneier.com/blog/archives/2008/04/…

The study:
http://journals.royalsociety.org/content/…
Prospect theory:
http://www.schneier.com/essay-155.html

Bonobos in the wild:
http://www.newyorker.com/reporting/2007/07/30/…


Schneier/BT News

Audio and video of Schneier’s talk from InfoSecurity Europe on “Reconceptualizing Security,” or maybe “The Theater of Security.”
http://www.yada-yada.co.uk/podcasts/ReedExhibitions/…
http://www.yada-yada.co.uk/podcasts/ReedExhibitions/…
Schneier was interviewed on Dutch radio. The introduction and questions are in Dutch, but the answers are in English.
http://www.xs4all.nl/~herbertb/2000+/Radio/…
Schneier was interviewed on Antiwar Radio. It was an odd interview, starting from the essay “Portrait of the Modern Terrorist as an Idiot” and then meandering into the role of government versus corporations in security.
http://antiwar.com/radio/2008/04/11/bruce-schneier/

This Schneier interview was conducted on video even though it is presented as text, so it doesn’t read as well as the ones I’ve done via e-mail.
http://computerworld.com/action/article.do?…
Another Schneier interview, conducted at the RSA Conference:
http://techwebtv.feedroom.com/?…
Schneier video interview from the UK:
http://www.computerweekly.com/Articles/2008/04/30/…
Two Schneier video interviews from Australia:
http://www.builderau.com.au/video/soa/…
http://www.builderau.com.au/video/soa/…


The Doghouse: Passwordsafe.com

This isn’t my Password Safe. This is PasswordSafe.com. Password Safe is an open-source application that lives on your computer and encrypts your passwords. PasswordSafe.com lets you store your passwords on their server. They promise not to look at them.

“Can I trust PasswordSafe? As we mentioned, pretty much every function is automated, no-one here ever sees your information as it’s all taken care of by the programs and encrypted into the database. Again we’ll remind you, we do not recommend you store sensitive information at PasswordSafe. In house, we’ve used this service for many sites, banner programs, affiliate programs, free email services and much more.”

http://www.passwordsafe.com/

My Password Safe:
http://www.schneier.com/passsafe.html


The Ethics of Vulnerability Research

The standard way to take control of someone else’s computer is by exploiting a vulnerability in a software program on it. This was true in the 1960s when buffer overflows were first exploited to attack computers. It was true in 1988 when the Morris worm exploited a Unix vulnerability to attack computers on the Internet, and it’s still how most modern malware works.

Vulnerabilities are software mistakes—mistakes in specification and design, but mostly mistakes in programming. Any large software package will have thousands of mistakes. These vulnerabilities lie dormant in our software systems, waiting to be discovered. Once discovered, they can be used to attack systems. This is the point of security patching: eliminating known vulnerabilities. But many systems don’t get patched, so the Internet is filled with known, exploitable vulnerabilities.

New vulnerabilities are hot commodities. A hacker who discovers one can sell it on the black market, blackmail the vendor with disclosure, or simply publish it without regard to the consequences. Even if he does none of these, the mere fact the vulnerability is known by someone increases the risk to every user of that software. Given that, is it ethical to research new vulnerabilities?

Unequivocally, yes. Despite the risks, vulnerability research is enormously valuable. Security is a mindset, and looking for vulnerabilities nurtures that mindset. Deny practitioners this vital learning tool, and security suffers accordingly.

Security engineers see the world differently than other engineers. Instead of focusing on how systems work, they focus on how systems fail, how they can be made to fail, and how to prevent—or protect against—those failures. Most software vulnerabilities don’t ever appear in normal operations, only when an attacker deliberately exploits them. So security engineers need to think like attackers.

People without the mindset sometimes think they can design security products, but they can’t. And you see the results all over society—in snake-oil cryptography, software, Internet protocols, voting machines, and fare card and other payment systems. Many of these systems had someone in charge of “security” on their teams, but it wasn’t someone who thought like an attacker.

This mindset is difficult to teach, and may be something you’re born with or not. But people who have the mindset need to hone it by searching for and finding security vulnerabilities—again and again and again. And this is true regardless of the domain. Good cryptographers discover vulnerabilities in others’ algorithms and protocols. Good software security experts find vulnerabilities in others’ code. Good airport security designers figure out new ways to subvert airport security. And so on.

This is so important that when someone shows me a security design by someone I don’t know, my first question is, “What has the designer broken?” Anyone can design a security system that he cannot break. So when someone announces, “Here’s my security system, and I can’t break it,” your first reaction should be, “Who are you?” If he’s someone who has broken dozens of similar systems, his system is worth looking at. If he’s never broken anything, the chance is zero that it will be any good.

Vulnerability research is vital because it trains our next generation of computer security experts. Yes, newly discovered vulnerabilities in software and airports put us at risk, but they also give us more realistic information about how good the security actually is. And yes, there are more and less responsible—and more and less legal—ways to handle a new vulnerability. But the bad guys are constantly searching for new vulnerabilities, and if we have any hope of securing our systems, we need the good guys to be at least as competent. To me, the question isn’t whether it’s ethical to do vulnerability research. If someone has the skill to analyze and provide better insights into the problem, the question is whether it is ethical for him not to do vulnerability research.

This was originally published in InfoSecurity Magazine, as part of a point-counterpoint with Marcus Ranum. You can read Marcus’s half here.
http://searchsecurity.techtarget.com/…


Our Data, Ourselves

In the information age, we all have a data shadow.

We leave data everywhere we go. It’s not just our bank accounts and stock portfolios, or our itemized bills, listing every credit card purchase and telephone call we make. It’s automatic road toll collection systems, supermarket affinity cards, ATMs, and so on.

It’s also our lives. Our love letters and friendly chat. Our personal e-mails and SMS messages. Our business plans, strategies, and offhand conversations. Our political leanings and positions. And this is just the data we interact with. We all have shadow selves living in the data banks of hundreds of corporations and information brokers—information about us that is both surprisingly personal and uncannily complete—except for the errors that you can neither see nor correct.

What happens to our data happens to ourselves.

This shadow self doesn’t just sit there: it’s constantly touched. It’s examined and judged. When we apply for a bank loan, it’s our data that determines whether or not we get it. When we try to board an airplane, it’s our data that determines how thoroughly we get searched—or whether we get to board at all. If the government wants to investigate us, they’re more likely to go through our data than they are to search our homes; for a lot of that data, they don’t even need a warrant.

Who controls our data controls our lives.

It’s true. Whoever controls our data can decide whether we can get a bank loan, on an airplane, or into a country. Or what sort of discount we get from a merchant, or even how we’re treated by customer support. A potential employer can, illegally in the U.S., examine our medical data and decide whether or not to offer us a job. The police can mine our data and decide whether or not we’re a terrorist risk. If a criminal can get hold of enough of our data, he can open credit cards in our names, siphon money out of our investment accounts, even sell our property. Identity theft is the ultimate proof that control of our data means control of our life.

We need to take back our data.

Our data is a part of us. It’s intimate and personal, and we have basic rights to it. It should be protected from unwanted touch.

We need a comprehensive data privacy law. This law should protect all information about us, and not be limited merely to financial or health information. It should limit others’ ability to buy and sell our information without our knowledge and consent. It should allow us to see information about us held by others, and correct any inaccuracies we find. It should prevent the government from going after our information without judicial oversight. It should enforce data deletion, and limit data collection, where necessary. And we need more than token penalties for deliberate violations.

This is a tall order, and it will take years for us to get there. It’s easy to do nothing and let the market take over. But as we see with things like grocery store club cards and click-through privacy policies on websites, most people either don’t realize the extent their privacy is being violated or don’t have any real choice. And businesses, of course, are more than happy to collect, buy, and sell our most intimate information. But the long-term effects of this on society are toxic; we give up control of ourselves.

This essay previously appeared on Wired.com.
http://www.wired.com/politics/security/commentary/…


Comments from Readers

There are hundreds of comments—many of them interesting—on these topics on my blog. Search for the story you want to comment on, and join in.

http://www.schneier.com/


CRYPTO-GRAM is a free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise. You can subscribe, unsubscribe, or change your address on the Web at <http://www.schneier.com/crypto-gram.html>. Back issues are also available at that URL.

Please feel free to forward CRYPTO-GRAM, in whole or in part, to colleagues and friends who will find it valuable. Permission is also granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.

CRYPTO-GRAM is written by Bruce Schneier. Schneier is the author of the best sellers “Beyond Fear,” “Secrets and Lies,” and “Applied Cryptography,” and an inventor of the Blowfish and Twofish algorithms. He is Chief Security Technology Officer of British Telecom (BT), and is on the Board of Directors of the Electronic Privacy Information Center (EPIC). He is a frequent writer and lecturer on security topics. See <http://www.schneier.com>.

Crypto-Gram is a personal newsletter. Opinions expressed are not necessarily those of BT or BT Counterpane.

Copyright (c) 2008 by Bruce Schneier.

Sidebar photo of Bruce Schneier by Joe MacInnis.