Blog: May 2008 Archives

Bletchley Park May Close Due to Lack of Funds

Sad.

But, despite an impressive contribution to the war effort, the Bletchley Park site, now a museum, faces a bleak future unless it can secure funding to keep its doors open and its numerous exhibits from rotting away.

The Bletchley Park Trust receives no external funding. It has been deemed ineligible for funding by the National Lottery, and turned down by the Bill & Melinda Gates Foundation because the Microsoft founder will only fund internet-based technology projects.

“We are just about surviving. Money—or lack of it—is our big problem here. I think we have two to three more years of survival, but we need this time to find a solution to this,” said Simon Greenish, the Trust’s director.

As a result of lack of funds, the Trust is unable to rebuild the site’s rotting infrastructure and faces an uncertain future. “The Trust is the hardest-up museum I know,” said Greenish. “We have this huge estate to run and it’s one of the most important World War II stories there is.”

Anybody out there want to help put together a major contribution?

EDITED TO ADD (5/30): Yes, I am willing to be a focal point for donations. But I’m hoping for some major donors.

EDITED TO ADD (6/13): Donate here.

Posted on May 30, 2008 at 6:45 AM • 60 Comments

Vengeance

Jared Diamond on vengeance and human nature:

This question of state government’s recent origins, and, conversely, of its long failure to originate throughout most of human history, is a fundamental concern for social scientists. Until fifty-five hundred years ago, there were no state governments anywhere in the world. Even as late as 1492, all of North America, sub-Saharan Africa, Australia, New Guinea, and the Pacific islands, and most of Central and South America didn’t have states and instead operated under simpler forms of societal organization (chiefdoms, tribes, and bands). Today, though, the whole world map is divided into states. Of course, most of that extension of state government has involved existing states from elsewhere imposing their government on stateless societies, as happened in New Guinea. But the first state in world history, at least, must have arisen de novo, and we now know that states arose independently in many parts of the world. How did it happen?

[…]

…anthropologists, historians, and archeologists tell us that state governments have arisen independently under one of two sets of circumstances. Sometimes external pressure from an encroaching state has placed a people under such duress that it ceded individual rights to a government of its own that would be capable of offering effective resistance. For instance, about two centuries ago, the formerly separate Cherokee chiefdoms gradually formed a unified Cherokee government in a desperate attempt to resist pressure from whites. More frequently, chronic competition among warring non-state entities has ended when one gained a military advantage over the others by developing proto-state institutions: one example is the formation of the Zulu state by a particularly talented chief named Dingiswayo, in the early nineteenth century, out of an assortment of chiefdoms fighting each other.

[…]

We regularly ignore the fact that the thirst for vengeance is among the strongest of human emotions. It ranks with love, anger, grief, and fear, about which we talk incessantly. Modern state societies permit and encourage us to express our love, anger, grief, and fear, but not our thirst for vengeance. We grow up being taught that such feelings are primitive, something to be ashamed of and to transcend.

There is no doubt that state acceptance of every individual’s right to exact personal vengeance would make it impossible for us to coexist peacefully as fellow-citizens of the same state. Otherwise, we, too, would be living under the conditions of constant warfare prevailing in non-state societies like those of the New Guinea Highlands.

Posted on May 29, 2008 at 1:07 PM • 34 Comments

TPM to End Piracy

Ha ha ha ha. Famous last words from Atari founder Nolan Bushnell:

“There is a stealth encryption chip called a TPM that is going on the motherboards of most of the computers that are coming out now,” he pointed out.

“What that says is that in the games business we will be able to encrypt with an absolutely verifiable private key in the encryption world—which is uncrackable by people on the internet and by giving away passwords—which will allow for a huge market to develop in some of the areas where piracy has been a real problem.”

“TPM” stands for “Trusted Platform Module.” It’s a chip that is probably already in your computer and may someday be used to enforce security: both your security, and the security of software and media companies against you. The system is complicated, and while it will prevent some attacks, there are lots of ways to hack it. (I’ve written about TPM here, and here when Microsoft called it Palladium. Ross Anderson has some good stuff here.)

Posted on May 29, 2008 at 6:33 AM • 50 Comments

Spray-On Explosive Detector

Interesting:

William Trogler and his team at the University of California, San Diego, made a silafluorene-fluorene copolymer to identify nitrogen-containing explosives. It is the first of its kind to act as a switchable sensor with picogram (10⁻¹² g) detection limits, and is reported in the Royal Society of Chemistry’s Journal of Materials Chemistry.

Trogler’s polymer can detect explosives at much lower levels than existing systems because it detects particles instead of explosive vapours. In the team’s new method, one simply sprays the polymer solution over the test area, lets it dry, and shines UV light on it. Spots of explosive quench the fluorescent polymer and turn blue….

Posted on May 28, 2008 at 12:40 PM • 22 Comments

Tracking People with their Mobile Phones

Not that we didn’t think it was possible:

The surveillance mechanism works by monitoring the signals produced by mobile handsets and then locating the phone by triangulation—measuring the phone’s distance from three receivers.

[…]

The Information Commissioner’s Office (ICO) expressed cautious approval of the technology, which does not identify the owner of the phone but rather the handset’s IMEI code—a unique number given to every device so that the network can recognise it.

But an ICO spokesman said, “we would be very worried if this technology was used in connection with other systems that contain personal information, if the intention was to provide more detailed profiles about identifiable individuals and their shopping habits.”

Only the phone network can match a handset’s IMEI number to the personal details of a customer.

Path Intelligence, the Portsmouth-based company which developed the technology, said its equipment was just a tool for market research. “There’s absolutely no way we can link the information we gather back to the individual,” a spokeswoman said. “There’s nothing personal in the data.”

Liberty, the campaign group, said that although the data do not meet the legal definition of ‘personal information’, it “had the potential” to identify particular individuals’ shopping habits by referencing information held by the phone networks.

Seems to me that the point of sale is a pretty obvious place to match the location of an anonymous person with an identity.
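
Incidentally, what the excerpt calls triangulation is, strictly speaking, trilateration: the handset sits at the intersection of three distance circles around the receivers. Here is a minimal sketch of that geometry in C; the receiver positions and distances are made-up values for illustration, not anything from Path Intelligence’s actual system.

    /* Trilateration sketch: given three receivers at known positions and
     * the handset's measured distance to each, solve for the handset's
     * position. All numbers are hypothetical. */
    #include <stdio.h>

    int main(void) {
        /* Receiver positions in metres (assumed, for illustration). */
        double x1 = 0.0,   y1 = 0.0;
        double x2 = 100.0, y2 = 0.0;
        double x3 = 0.0,   y3 = 100.0;

        /* Measured distances to the handset; these happen to be
         * consistent with a handset at (50, 50). */
        double d1 = 70.71, d2 = 70.71, d3 = 70.71;

        /* Subtracting the circle equations pairwise eliminates the
         * quadratic terms and leaves a 2x2 linear system in (x, y). */
        double a1 = 2.0 * (x2 - x1), b1 = 2.0 * (y2 - y1);
        double c1 = d1*d1 - d2*d2 + x2*x2 - x1*x1 + y2*y2 - y1*y1;
        double a2 = 2.0 * (x3 - x1), b2 = 2.0 * (y3 - y1);
        double c2 = d1*d1 - d3*d3 + x3*x3 - x1*x1 + y3*y3 - y1*y1;

        double det = a1 * b2 - a2 * b1;  /* nonzero unless receivers are collinear */
        if (det == 0.0) { fprintf(stderr, "receivers are collinear\n"); return 1; }

        printf("Estimated handset position: (%.1f, %.1f)\n",
               (c1 * b2 - c2 * b1) / det,
               (a1 * c2 - a2 * c1) / det);
        return 0;
    }

Subtracting the circle equations pairwise cancels the squared unknowns, which is why two linear equations are enough to pin down the position.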

EDITED TO ADD (6/13): More info.

Posted on May 27, 2008 at 12:57 PM • 55 Comments

Dan Geer on Security, Monoculture, Metrics, Evolution, Etc.

Here is the text and video of Dan Geer’s remarks at Source Boston 2008, basically a L0pht reunion with friends.

At the end of the day, however, we are facing a much bigger, more metaphysical question than the ones I have so far posed. That I can pose many others is of no consequence; either you are sick of them by now or you are scribbling down your own as I speak. The bigger question is this—how much security do we want?

A world without failure is a world without freedom. A world without the possibility of sin is a world without the possibility of righteousness. A world without the possibility of crime is a world where you cannot prove you are not a criminal. A technology that can give you everything you want is a technology that can take away everything that you have. At some point, real soon now, some of us security geeks will have to say that there comes a point at which safety is not safe.

Posted on May 27, 2008 at 6:23 AM • 19 Comments

Nasal Spray Increases Trust for Strangers

Okay; this’ll be fun. What’s the most creative abuse for this that you can think of?

Previous studies have shown that participants in “trust games” took greater risks with their money after inhaling the hormone via a nasal spray.

In this latest experiment, published in the journal Neuron, the researchers asked volunteer subjects to take part in a similar game.

They were each asked to contribute money to a human trustee, with the understanding that the trustee would invest the money and decide whether to return the profits, or betray the subject’s trust by keeping the profit.

The subjects also received doses of oxytocin or a placebo via a nasal spray.

After investing, the participants were given feedback on the trustees. When their trust was abused, the placebo group became less willing to invest. But the players who had been given oxytocin continued to trust their money with a broker.

“We can see that oxytocin has a very powerful effect,” said Dr Baumgartner.

“The subjects who received oxytocin demonstrated no change in their trust behaviour, even though they were informed that their trust was not honoured in roughly 50% of cases.”

In a second game, where the human trustees were replaced by a computer which gave random returns, the hormone made no difference to the players’ investment behaviour.

“It appears that oxytocin affects social responses specifically related to trust,” Dr Baumgartner said.

Posted on May 26, 2008 at 1:30 PM • 48 Comments

How to Sell Security

It’s a truism in sales that it’s easier to sell someone something he wants than a defense against something he wants to avoid. People are reluctant to buy insurance, or home security devices, or computer security anything. It’s not that they don’t ever buy these things, but it’s an uphill struggle.

The reason is psychological. And it’s the same dynamic when it’s a security vendor trying to sell its products or services, a CIO trying to convince senior management to invest in security, or a security officer trying to implement a security policy with her company’s employees.

It’s also true that the better you understand your buyer, the better you can sell.

First, a bit about Prospect Theory, the underlying theory behind the newly popular field of behavioral economics. Prospect Theory was developed by Daniel Kahneman and Amos Tversky in 1979 (Kahneman went on to win a Nobel Prize for this and other similar work) to explain how people make trade-offs that involve risk. Before this work, economists had a model of “economic man,” a rational being who makes trade-offs based on some logical calculation. Kahneman and Tversky showed that real people are far more subtle and ornery.

Here’s an experiment that illustrates Prospect Theory. Take a roomful of subjects and divide them into two groups. Ask one group to choose between these two alternatives: a sure gain of $500 and a 50 percent chance of gaining $1,000. Ask the other group to choose between these two alternatives: a sure loss of $500 and a 50 percent chance of losing $1,000.

These two trade-offs are very similar, and traditional economics predicts that whether you’re contemplating a gain or a loss doesn’t make a difference: People make trade-offs based on a straightforward calculation of the relative outcome. Some people prefer sure things and others prefer to take chances. Whether the outcome is a gain or a loss doesn’t affect the mathematics and therefore shouldn’t affect the results. This is traditional economics, and it’s called Utility Theory.
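
To see just how similar, write out the expected values; in both questions the sure thing and the gamble are worth exactly the same on average:

    E[sure gain]  = $500
    E[risky gain] = 0.5 × $1,000 + 0.5 × $0 = $500
    E[sure loss]  = −$500
    E[risky loss] = 0.5 × (−$1,000) + 0.5 × $0 = −$500

Under Utility Theory, the framing as gain or loss adds no information, so it should not change anyone’s choice.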

But Kahneman’s and Tversky’s experiments contradicted Utility Theory. When faced with a gain, about 85 percent of people chose the sure smaller gain over the risky larger gain. But when faced with a loss, about 70 percent chose the risky larger loss over the sure smaller loss.

This experiment, repeated again and again by many researchers across ages, genders, cultures, and even species, always yielded the same result, and it rocked economics. Directly contradicting the traditional idea of “economic man,” Prospect Theory recognizes that people have subjective values for gains and losses. We have evolved a cognitive bias: a pair of heuristics. One, a sure gain is better than a chance at a greater gain, or “A bird in the hand is worth two in the bush.” And two, a sure loss is worse than a chance at a greater loss, or “Run away and live to fight another day.” Of course, these are not rigid rules. Only a fool would take a sure $100 over a 50 percent chance at $1,000,000. But all things being equal, we tend to be risk-averse when it comes to gains and risk-seeking when it comes to losses.

This cognitive bias is so powerful that it can lead to logically inconsistent results. Google the “Asian Disease Experiment” for an almost surreal example. Describing the same policy choice in different ways—either as “200 lives saved out of 600” or “400 lives lost out of 600”—yields wildly different risk reactions.

Evolutionarily, the bias makes sense. It’s a better survival strategy to accept small gains rather than risk them for larger ones, and to risk larger losses rather than accept smaller losses. Lions, for example, chase young or wounded wildebeests because the investment needed to kill them is lower. Mature and healthy prey would probably be more nutritious, but there’s a risk of missing lunch entirely if it gets away. And a small meal will tide the lion over until another day. Getting through today is more important than the possibility of having food tomorrow. Similarly, it is better to risk a larger loss than to accept a smaller loss. Because animals tend to live on the razor’s edge between starvation and reproduction, any loss of food—whether small or large—can be equally bad: either can result in death, so the best option is to risk everything for the chance at no loss at all.

How does Prospect Theory explain the difficulty of selling the prevention of a security breach? It’s a choice between a small sure loss—the cost of the security product—and a large risky loss: for example, the results of an attack on one’s network. Of course there’s a lot more to the sale. The buyer has to be convinced that the product works, and he has to understand the threats against him and the risk that something bad will happen. But all things being equal, buyers would rather take the chance that the attack won’t happen than suffer the sure loss that comes from purchasing the security product.

Security sellers know this, even if they don’t understand why, and are continually trying to frame their products in terms of positive results. That’s why you see slogans with the basic message, “We take care of security so you can focus on your business,” or carefully crafted ROI models that demonstrate how profitable a security purchase can be. But these never seem to work. Security is fundamentally a negative sell.

One solution is to stoke fear. Fear is a primal emotion, far older than our ability to calculate trade-offs. And when people are truly scared, they’re willing to do almost anything to make that feeling go away; lots of other psychological research supports that. Any burglar alarm salesman will tell you that people buy only after they’ve been robbed, or after one of their neighbors has been robbed. And the fears stoked by 9/11, and the politics surrounding 9/11, have fueled an entire industry devoted to counterterrorism. When emotion takes over like that, people are much less likely to think rationally.

Though effective, fear mongering is not very ethical. The better solution is not to sell security directly, but to include it as part of a more general product or service. Your car comes with safety and security features built in; they’re not sold separately. Same with your house. And it should be the same with computers and networks. Vendors need to build security into the products and services that customers actually want. CIOs should include security as an integral part of everything they budget for. Security shouldn’t be a separate policy for employees to follow but part of overall IT policy.

Security is inherently about avoiding a negative, so you can never ignore the cognitive bias embedded so deeply in the human brain. But if you understand it, you have a better chance of overcoming it.

This essay originally appeared in CIO.

Posted on May 26, 2008 at 5:57 AM • 35 Comments

BlackBerry Giving Encryption Keys to Indian Government

RIM encrypts e-mail between BlackBerry devices and the server with 256-bit AES encryption. The Indian government doesn’t like this at all; they want to snoop on the data. RIM’s response was basically: That’s not possible. The Indian government’s counter was: Then we’ll ban BlackBerries. After months of threats, it looks like RIM is giving in to Indian demands and handing over the encryption keys.

EDITED TO ADD (5/27): News:

BlackBerry vendor Research-In-Motion (RIM) said it cannot hand over the message encryption key to the government as its security structure does not allow any ‘third party’ or even the company to read the information transferred over its network.

EDITED TO ADD (7/2): Looks like they have resolved the impasse.

Posted on May 21, 2008 at 2:09 PM • 50 Comments

Risk and Culture

The Second National Risk and Culture Study, conducted by the Cultural Cognition Project at Yale Law School.

Abstract:

Cultural Cognition refers to the disposition to conform one’s beliefs about societal risks to one’s preferences for how society should be organized. Based on surveys and experiments involving some 5,000 Americans, the Second National Risk and Culture Study presents empirical evidence of the effect of this dynamic in generating conflict about global warming, school shootings, domestic terrorism, nanotechnology, and the mandatory vaccination of school-age girls against HPV, among other issues. The Study also presents evidence of risk-communication strategies that counteract cultural cognition. Because nuclear power affirms rather than threatens the identity of persons who hold individualist values, for example, proposing it as a solution to global warming makes persons who hold such values more willing to consider evidence that climate change is a serious risk. Because people tend to impute credibility to people who share their values, persons who hold hierarchical and egalitarian values are less likely to polarize when they observe people who hold their values advocating unexpected positions on the vaccination of young girls against HPV. Such techniques can help society to create a deliberative climate in which citizens converge on policies that are both instrumentally sound and expressively congenial to persons of diverse values.

And from the conclusion:

There is a culture war in America, but it is about facts, not values. There is very little evidence that most Americans care nearly as much about issues that symbolize competing cultural values as they do about the economy, national security, and the safety and health of themselves and their loved ones. There is ample evidence, however, that Americans are sharply divided along cultural lines about what sorts of conditions endanger these interests and what sorts of policies effectively counteract such risks.

Findings from the Second National Culture and Risk Study help to show why. Psychologically speaking, it’s much easier to believe that conduct one finds dishonorable or offensive is dangerous, and conduct one finds noble or admirable is socially beneficial, than vice versa. People are also much more inclined to accept information about risk and danger when it comes from someone who shares their values than when it comes from someone who holds opposing commitments.

Posted on May 21, 2008 at 5:19 AM • 17 Comments

Our Data, Ourselves

In the information age, we all have a data shadow.

We leave data everywhere we go. It’s not just our bank accounts and stock portfolios, or our itemized bills, listing every credit card purchase and telephone call we make. It’s automatic road-toll collection systems, supermarket affinity cards, ATMs and so on.

It’s also our lives. Our love letters and friendly chat. Our personal e-mails and SMS messages. Our business plans, strategies and offhand conversations. Our political leanings and positions. And this is just the data we interact with. We all have shadow selves living in the data banks of hundreds of corporations and information brokers—information about us that is both surprisingly personal and uncannily complete—except for the errors that you can neither see nor correct.

What happens to our data happens to ourselves.

This shadow self doesn’t just sit there: It’s constantly touched. It’s examined and judged. When we apply for a bank loan, it’s our data that determines whether or not we get it. When we try to board an airplane, it’s our data that determines how thoroughly we get searched—or whether we get to board at all. If the government wants to investigate us, they’re more likely to go through our data than they are to search our homes; for a lot of that data, they don’t even need a warrant.

Who controls our data controls our lives.

It’s true. Whoever controls our data can decide whether we can get a bank loan, on an airplane or into a country. Or what sort of discount we get from a merchant, or even how we’re treated by customer support. A potential employer can, illegally in the U.S., examine our medical data and decide whether or not to offer us a job. The police can mine our data and decide whether or not we’re a terrorist risk. If a criminal can get hold of enough of our data, he can open credit cards in our names, siphon money out of our investment accounts, even sell our property. Identity theft is the ultimate proof that control of our data means control of our life.

We need to take back our data.

Our data is a part of us. It’s intimate and personal, and we have basic rights to it. It should be protected from unwanted touch.

We need a comprehensive data privacy law. This law should protect all information about us, and not be limited merely to financial or health information. It should limit others’ ability to buy and sell our information without our knowledge and consent. It should allow us to see information about us held by others, and correct any inaccuracies we find. It should prevent the government from going after our information without judicial oversight. It should enforce data deletion, and limit data collection, where necessary. And we need more than token penalties for deliberate violations.

This is a tall order, and it will take years for us to get there. It’s easy to do nothing and let the market take over. But as we see with things like grocery store club cards and click-through privacy policies on websites, most people either don’t realize the extent to which their privacy is being violated or don’t have any real choice. And businesses, of course, are more than happy to collect, buy, and sell our most intimate information. But the long-term effects of this on society are toxic; we give up control of ourselves.

This essay originally appeared on Wired.com.

EDITED TO ADD (5/21): A rebuttal.

Posted on May 20, 2008 at 1:10 PM • 74 Comments

Spying on Computer Monitors Off Reflective Objects

Impressive research:

At Saarland University, researchers trained a $500 telescope on a teapot near a computer monitor 5 meters away. The images are tiny but amazingly clear, professor Michael Backes told IDG.

All it took was a $500 telescope trained on a reflective object in front of the monitor. For example, a teapot yielded readable images of 12 point Word documents from a distance of 5 meters (16 feet). From 10 meters, they were able to read 18 point fonts. With a $27,500 Dobson telescope, they could get the same quality of images at 30 meters.

Here’s the paper:

Abstract

We present a novel eavesdropping technique for spying at a distance on data that is displayed on an arbitrary computer screen, including the currently prevalent LCD monitors. Our technique exploits reflections of the screen’s optical emanations in various objects that one commonly finds in close proximity to the screen and uses those reflections to recover the original screen content. Such objects include eyeglasses, tea pots, spoons, plastic bottles, and even the eye of the user. We have demonstrated that this attack can be successfully mounted to spy on even small fonts using inexpensive, off-the-shelf equipment (less than 1500 dollars) from a distance of up to 10 meters. Relying on more expensive equipment allowed us to conduct this attack from over 30 meters away, demonstrating that similar attacks are feasible from the other side of the street or from a close-by building. We additionally establish theoretical limitations of the attack; these limitations may help to estimate the risk that this attack can be successfully mounted in a given environment.
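
The theoretical limitations the abstract mentions come down to diffraction: the telescope’s aperture puts a hard floor on the detail it can resolve at a given distance. Here is a back-of-the-envelope version in C using the Rayleigh criterion; the 0.1 mm effective detail size is an assumption standing in for how strongly a curved teapot demagnifies a reflected character, not a number taken from the paper.

    /* Rayleigh criterion sketch: minimum telescope aperture D needed to
     * resolve a detail of size s at distance d, for wavelength lambda.
     * theta ~ 1.22 * lambda / D and theta = s / d, so D ~ 1.22 * lambda * d / s.
     * The detail size below is an assumed placeholder, not the paper's figure. */
    #include <stdio.h>

    int main(void) {
        double lambda = 550e-9;  /* green light, metres */
        double d      = 10.0;    /* distance to the reflecting object, metres */
        double s      = 0.1e-3;  /* assumed size of a reflected character detail, metres */

        double aperture = 1.22 * lambda * d / s;
        printf("Minimum aperture at %.0f m: about %.1f cm\n", d, aperture * 100.0);
        return 0;
    }

With these numbers the minimum aperture works out to about 6.7 cm, and it scales linearly with distance, which is why reading reflections from 30 meters required a much larger telescope.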

Posted on May 20, 2008 at 10:44 AM • 48 Comments

Airlines Profiting from TSA Rules

From CNN:

Before 9/11, airlines and security personnel—and I use the term “security personnel” loosely—might have let a nickname or even a maiden name on a ticket slide. No longer. If you have the wrong name on your ticket, you’re probably grounded. And there are two reasons for this: security and greed.

The Transportation Security Administration wants to be sure the same person who bought the ticket, and who was screened, is boarding the plane. But when there’s an inexact match, the airline can either charge a $100 “change” fee or force you to buy a new ticket. In an industry where every dollar counts, the exact-name rule is the government’s gift to cash-starved air carriers.

That’s the situation Gordon was confronted with, even when it was obvious that “Jan” and “Janet” were one and the same. There were suggestions that a new ticket might need to be purchased. “We didn’t let it get to that,” he recalls. Instead, he asked to speak with a supervisor who could finally fix the codes so that the ticket and passport matched up. How did all of this happen in the first place? Turns out Jan Gordon had signed up for a frequent flier account under her informal name, so when she booked an award ticket, it also used her informal—and inaccurate—name.

There are two things to get pissed off about here. One, the airlines profiting off a TSA rule. And two, a TSA rule that requires them to ignore what is obvious.

EDITED TO ADD (5/28): To add some more detail here, the rule makes absolutely no sense. If this were sensible, the TSA employee who checks the ticket against the ID would determine whether the names were the same. Instead, the passenger is forced to go back to the airline, which, for a fee, changes the name on the ticket to match the ID. This latter system is no more secure. If anything, it’s less secure. But rules are rules, so it’s what has to happen.

Posted on May 20, 2008 at 6:51 AM • 56 Comments

Random Number Bug in Debian Linux

This is a big deal:

On May 13th, 2008 the Debian project announced that Luciano Bello found an interesting vulnerability in the OpenSSL package they were distributing. The bug in question was caused by the removal of the following line of code from md_rand.c

	MD_Update(&m,buf,j);
	[ .. ]
	MD_Update(&m,buf,j); /* purify complains */

These lines were removed because they caused the Valgrind and Purify tools to produce warnings about the use of uninitialized data in any code that was linked to OpenSSL. You can see one such report to the OpenSSL team here. Removing this code has the side effect of crippling the seeding process for the OpenSSL PRNG. Instead of mixing in random data for the initial seed, the only “random” value that was used was the current process ID. On the Linux platform, the default maximum process ID is 32,768, resulting in a very small number of seed values being used for all PRNG operations.
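
To see why a PID-only seed is fatal, here is a toy sketch in C. This is not OpenSSL’s code; the generator below is a stand-in. But the attack structure is the real one: with at most 32,768 possible seeds, an attacker simply enumerates every key the generator could ever have produced.

    /* Toy illustration of the Debian weakness: if a PRNG's only seed
     * material is the process ID, there are at most PID_MAX possible
     * output streams, and an attacker can try them all.
     * This is NOT OpenSSL's code; the PRNG below is a stand-in. */
    #include <stdio.h>

    #define PID_MAX 32768  /* Linux default maximum process ID */

    /* A stand-in PRNG whose entire state comes from the seed. */
    static unsigned long long state;
    static void toy_seed(unsigned long long pid) { state = pid; }
    static unsigned long long toy_rand(void) {
        state = state * 6364136223846793005ULL + 1442695040888963407ULL;
        return state;
    }

    int main(void) {
        /* The attacker's job: enumerate every possible "random" output. */
        for (unsigned long long pid = 0; pid < PID_MAX; pid++) {
            toy_seed(pid);
            unsigned long long candidate = toy_rand();
            (void)candidate;  /* in a real attack: derive a key and test it */
        }
        printf("Enumerated all %d possible seeds.\n", PID_MAX);
        return 0;
    }

And that is what happened in practice: precomputed lists of all the weak Debian SSH and SSL keys circulated within days of the announcement.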

More info, from Debian, here. And from the hacker community here. Seems that the bug was introduced in September 2006.

More analysis here. And a cartoon.

Random numbers are used everywhere in cryptography, for both short- and long-term security. And, as we’ve seen here, security flaws in random number generators are really easy to accidentally create and really hard to discover after the fact. Back when the NSA was routinely weakening commercial cryptography, their favorite technique was reducing the entropy of the random number generator.

Posted on May 19, 2008 at 6:07 AM • 88 Comments

Terrorists Attacking via Air Conditioners

From the DHS and the FBI, a great movie-plot threat:

It is possible to introduce chemical or biological agents directly into external air-intakes or internal air-circulation systems. Unless the building has carbon filters (or the equivalent), volatile chemical agents would not be stopped and would enter the building unattenuated.

[…]

Other scenarios involve the use of helicopters equipped with agricultural spraying equipment to discharge large chemical or biological contaminant clouds near external or roof-mounted air intakes or ventilators.

[…]

Terrorists have considered producing a radiological dispersal device (RDD) by burning or exploding a source or sources containing radioactive material. If large quantities of easily dispersed radioactive material were released or exploded near an HVAC intake or circulation system, it is possible that targeted individuals could suffer some adverse health effects.

I’m sure glad my government is working on this stuff.

Posted on May 16, 2008 at 12:03 PM • 70 Comments

Crossing Borders with Laptops and PDAs

Last month a US court ruled that border agents can search your laptop, or any other electronic device, when you’re entering the country. They can take your computer and download its entire contents, or keep it for several days. Customs and Border Protection has not published any rules regarding this practice, and I and others have written a letter to Congress urging it to investigate and regulate this practice.

But the US is not alone. British customs agents search laptops for pornography. And there are reports on the internet of this sort of thing happening at other borders, too. You might not like it, but it’s a fact. So how do you protect yourself?

Encrypting your entire hard drive, something you should certainly do for security in case your computer is lost or stolen, won’t work here. The border agent is likely to start this whole process with a “please type in your password”. Of course you can refuse, but the agent can search you further, detain you longer, refuse you entry into the country and otherwise ruin your day.

You’re going to have to hide your data. Set a portion of your hard drive to be encrypted with a different key – even if you also encrypt your entire hard drive – and keep your sensitive data there. Lots of programs allow you to do this. I use PGP Disk. TrueCrypt is also good, and free.

While customs agents might poke around on your laptop, they’re unlikely to find the encrypted partition. (You can make the icon invisible, for some added protection.) And if they download the contents of your hard drive to examine later, you won’t care.

Be sure to choose a strong encryption password. Details are too complicated for a quick tip, but basically anything easy to remember is easy to guess. (My advice is here.) Unfortunately, this isn’t a perfect solution. Your computer might have left a copy of the password on the disk somewhere, and (as I also describe at the above link) smart forensic software will find it.

So your best defence is to clean up your laptop. A customs agent can’t read what you don’t have. You don’t need five years’ worth of email and client data. You don’t need your old love letters and those photos (you know the ones I’m talking about). Delete everything you don’t absolutely need. And use a secure file erasure program to do it. While you’re at it, delete your browser’s cookies, cache and browsing history. It’s nobody’s business what websites you’ve visited. And turn your computer off – don’t just put it to sleep – before you go through customs; that deletes other things. Think of all this as the last thing to do before you stow your electronic devices for landing. Some companies now give their employees forensically clean laptops for travel, and have them download any sensitive data over a virtual private network once they’ve entered the country. They send any work back the same way, and delete everything again before crossing the border to go home. This is a good idea if you can do it.
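
For the secure-erasure step, the core idea is to overwrite a file’s bytes before unlinking it. Below is a minimal single-pass sketch in C, for illustration only; real tools such as shred or srm make multiple passes, and on journaling filesystems, SSDs, and backups an in-place overwrite may not destroy every copy.

    #include <stdio.h>
    #include <string.h>

    /* Overwrite a file with zeros, then delete it. Illustrative only. */
    static int secure_delete(const char *path) {
        FILE *f = fopen(path, "r+b");
        if (!f) return -1;

        if (fseek(f, 0, SEEK_END) != 0) { fclose(f); return -1; }
        long size = ftell(f);
        rewind(f);

        char zeros[4096];
        memset(zeros, 0, sizeof zeros);
        for (long done = 0; done < size; ) {
            long chunk = size - done;
            if (chunk > (long)sizeof zeros) chunk = (long)sizeof zeros;
            if (fwrite(zeros, 1, (size_t)chunk, f) != (size_t)chunk) { fclose(f); return -1; }
            done += chunk;
        }
        fflush(f);           /* push the zeros toward the disk... */
        fclose(f);
        return remove(path); /* ...then remove the directory entry */
    }

    int main(int argc, char **argv) {
        if (argc != 2) { fprintf(stderr, "usage: %s FILE\n", argv[0]); return 1; }
        return secure_delete(argv[1]) == 0 ? 0 : 1;
    }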

If you can’t, consider putting your sensitive data on a USB drive or even a camera memory card: even 16GB cards are reasonably priced these days. Encrypt it, of course, because it’s easy to lose something that small. Slip it in your pocket, and it’s likely to remain unnoticed even if the customs agent pokes through your laptop. If someone does discover it, you can try saying: “I don’t know what’s on there. My boss told me to give it to the head of the New York office.” If you’ve chosen a strong encryption password, you won’t care if he confiscates it.

Lastly, don’t forget your phone and PDA. Customs agents can search those too: emails, your phone book, your calendar. Unfortunately, there’s nothing you can do here except delete things.

I know this all sounds like work, and that it’s easier to just ignore everything here and hope you don’t get searched. Today, the odds are in your favour. But new forensic tools are making automatic searches easier and easier, and the recent US court ruling is likely to embolden other countries. It’s better to be safe than sorry.

This essay originally appeared in The Guardian.

Some other advice here.

EDITED TO ADD (5/18): Many people have pointed out to me that I advise people to lie to a government agent. That is, of course, illegal in the U.S. and probably most other countries—and probably not the best advice for me to be on record as giving. So be sure you clear your story first with both your boss and the New York office.

Posted on May 16, 2008 at 6:10 AM • 111 Comments

Crypto-Gram Tenth Anniversary Issue

Ten years ago I started Crypto-Gram. It was a monthly newsletter written entirely by me. No guest columns. No advertising. Nothing but me writing about security, published on the 15th of every month. Now, 120 issues later, none of that has changed.

I started Crypto-Gram because I had a lot to say about security, and book-length commentaries were too slow and too infrequent. Sure, I was writing the occasional column in the occasional magazine, but those were also too slow and infrequent. Crypto-Gram was supposed to be my personal voice on security, sent directly to those who wanted to read it.

I originally thought about charging for Crypto-Gram. I knew of several newsletters that funded themselves through subscription fees, and figured that a couple of hundred subscribers at $150 or so would sustain it very nicely. I don’t remember why I decided not to—did someone convince me, or did I figure it out myself—but it was easily the smartest decision I made about this newsletter. If I’d charged money for the thing, no one would have read it. Since I didn’t, lots of people subscribed.

There were 457 subscribers by the end of the first day. After that, circulation climbed slowly and steadily. Here are the totals for May of each year:

1999: 15,964
2000: 33,827
2001: 45,832
2002: 58,046
2003: 66,368
2004: 75,907
2005: 83,835
2006: 87,839
2007: 92,488
2008: 98,618

Those numbers hide a lot of readers, like the tens of thousands who read Crypto-Gram via the Web. I also know of people who forward my newsletter to hundreds of others. There are many foreign translations that have their own subscription lists. These days I estimate that I have about 25,000 newsletter readers not included in those numbers.

I have no idea where the initial batch of subscribers came from. Nor do I remember how people subscribed before the webpage form was done. I do remember my first big burst of subscribers, though. It was following my special issue after 9/11. I wrote something short for the September issue, but I found that I couldn’t stop writing. Two weeks later, I published a special issue on the terrorist attacks. Readers forwarded that issue again and again, and I ended up with many new subscribers as a result.

Reader comments began earlier, in December 1998. I found I was getting some really intelligent comments from my readers—especially those who disagreed with me—and I wanted to publish some of them. Some of the disagreements were nasty. In October 1998, I started a column called “The Doghouse,” where I made fun of snake-oil security products. Some of the companies didn’t like being so characterized, and sent me threatening legal letters.

Turns out that publishing those sorts of threats as letters to Crypto-Gram was the best defense, even though my lawyers always discouraged it. None of these incidents ever went anywhere, even though court papers were occasionally filed.

Over the years, Crypto-Gram’s focus has changed. Initially, it was all cryptography. Then, more computer and network security. Then—especially after 9/11—more general security: terrorism, airplanes, ID cards, voting machines, and so on. And now, more economics and psychology of security. My career has been a progression from the specific to the general, and Crypto-Gram has generalized to reflect that.

The next big change to Crypto-Gram came in October 2004. I had been reading about blogging, and wondered for several months if switching Crypto-Gram over to blog format was a good idea or not. Again, it was about speed and frequency. I found that others were commenting on security stories faster, and that by the time Crypto-Gram would come out, people had already linked to other stories. A blog would allow me to get my commentary out even faster, and to be part of the initial discussions.

I went back and forth. Several people advised me to change, that blogging was the format of the future. I was skeptical, preferring to push my newsletter into my readers’ mailboxes every month. I sent a survey to 400 of my subscribers—200 random subscribers and 200 people who had subscribed within the past month—asking what they thought. My eventual solution was the second smartest thing I did with this newsletter: to do both.

The Schneier on Security blog started out as Crypto-Gram entries, delivered daily. And the early blog entries looked a lot like Crypto-Gram articles, with links at the end. Over the following months I learned more about the blogging style, and the entries started looking more like blog entries. Now the blog is primary, and on the 15th of every month I take the previous month’s blog entries and reconfigure them into Crypto-Gram format. Even today, most readers prefer to receive Crypto-Gram in their e-mail box every month—even if they also read the blog online.

These days, I like both. I like the immediacy of the blog, and I like the e-mail format of Crypto-Gram. And even after ten years, I still like the writing.

People often ask me where I find the time to do all of that writing. It’s an odd question for me, because it’s what I enjoy doing. I find time at home, on airplanes, in hotel rooms, everywhere. Writing isn’t a chore—okay, maybe sometimes it is—it’s something that relaxes me. I enjoy putting my ideas down in a coherent narrative flow. And there’s nothing that pleases me more than the fact that people read it.

The best fan mail I get from a reader says something like: “You changed the way I think.” That’s what I want to do. I want to change the way you think about security. I want to change the way you think about threats, and risk, and trade-offs, about security products and services, about security rhetoric in politics. It matters less if you agree with me or disagree, only that you’re thinking differently.

Thank you. Thank you on this 10th anniversary issue. Thank you, long-time readers. Thank you, new readers. Thank you for continuing to read what I have to write. This is still a lot of fun—and interesting and thought provoking—for me. I hope it continues to be interesting, thought provoking, and fun for you.

Posted on May 15, 2008 at 11:13 AM • 48 Comments

Third Annual Movie-Plot Threat Contest Winner

On April 7—seven days late—I announced the Third Annual Movie-Plot Threat Contest:

For this contest, the goal is to create fear. Not just any fear, but a fear that you can alleviate through the sale of your new product idea. There are lots of risks out there, some of them serious, some of them so unlikely that we shouldn’t worry about them, and some of them completely made up. And there are lots of products out there that provide security against those risks.

Your job is to invent one. First, find a risk or create one. It can be a terrorism risk, a criminal risk, a natural-disaster risk, a common household risk—whatever. The weirder the better. Then, create a product that everyone simply has to buy to protect him- or herself from that risk. And finally, write a catalog ad for that product.

[…]

Entries are limited to 150 words … because fear doesn’t require a whole lot of explaining. Tell us why we should be afraid, and why we should buy your product.

On May 7, I posted five semi-finalists out of the 327 blog comments.

Sadly, two of those five were above the 150-word limit. Out of the three remaining, I (with the help of my readers) have chosen a winner.

Presenting the winner of the Third Annual Movie-Plot Threat Contest, Aaron Massey:

Tommy Tester Toothpaste Strips:

Many Americans were shocked to hear the results of the research trials regarding heavy metals and toothpaste conducted by the New England Journal of Medicine, which FDA is only now attempting to confirm. This latest scare comes after hundreds of deaths were linked to toothpaste contaminated with diethylene glycol, a potentially dangerous chemical used in antifreeze.

In light of this continuing health risk, Hamilton Health Labs is proud to announce Tommy Tester Toothpaste Strips! Just apply a dab of toothpaste from a fresh tube onto the strip and let it rest for 3 minutes. It’s just that easy! If the strip turns blue, rest assured that your entire tube of toothpaste is safe. However, if the strip turns pink, dispose of the toothpaste immediately and call the FDA health emergency number at 301-443-1240.

Do not let your family become a statistic when the solution is only $2.95!

Aaron wins, well, nothing really, except the fame and glory afforded by this blog. So give him some fame and glory. Congratulations.

Posted on May 15, 2008 at 6:24 AM • 29 Comments

The Ethics of Vulnerability Research

The standard way to take control of someone else’s computer is by exploiting a vulnerability in a software program on it. This was true in the 1960s when buffer overflows were first exploited to attack computers. It was true in 1988 when the Morris worm exploited a Unix vulnerability to attack computers on the Internet, and it’s still how most modern malware works.

Vulnerabilities are software mistakes—mistakes in specification and design, but mostly mistakes in programming. Any large software package will have thousands of mistakes. These vulnerabilities lie dormant in our software systems, waiting to be discovered. Once discovered, they can be used to attack systems. This is the point of security patching: eliminating known vulnerabilities. But many systems don’t get patched, so the Internet is filled with known, exploitable vulnerabilities.
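
The programming mistakes in question are usually depressingly small. Here is the textbook case, a classic stack buffer overflow; this is a teaching sketch, not code from any particular product.

    #include <stdio.h>
    #include <string.h>

    /* BUG: strcpy() copies until it hits a NUL, with no length check.
     * A name longer than 15 bytes writes past buf and corrupts the
     * stack -- the raw material of an exploit. */
    void greet(const char *name) {
        char buf[16];
        strcpy(buf, name);
        printf("Hello, %s\n", buf);
    }

    /* The fix is mechanical: bound the copy to the buffer's size. */
    void greet_fixed(const char *name) {
        char buf[16];
        snprintf(buf, sizeof buf, "%s", name);
        printf("Hello, %s\n", buf);
    }

    int main(int argc, char **argv) {
        /* The safe version; calling greet() with untrusted input is the bug. */
        greet_fixed(argc > 1 ? argv[1] : "world");
        return 0;
    }

The bug lies dormant, exactly as described above, until someone supplies a name long enough, and carefully crafted enough, to do something useful to the attacker.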

New vulnerabilities are hot commodities. A hacker who discovers one can sell it on the black market, blackmail the vendor with disclosure, or simply publish it without regard to the consequences. Even if he does none of these, the mere fact the vulnerability is known by someone increases the risk to every user of that software. Given that, is it ethical to research new vulnerabilities?

Unequivocally, yes. Despite the risks, vulnerability research is enormously valuable. Security is a mindset, and looking for vulnerabilities nurtures that mindset. Deny practitioners this vital learning tool, and security suffers accordingly.

Security engineers see the world differently than other engineers. Instead of focusing on how systems work, they focus on how systems fail, how they can be made to fail, and how to prevent—or protect against—those failures. Most software vulnerabilities don’t ever appear in normal operations, only when an attacker deliberately exploits them. So security engineers need to think like attackers.

People without the mindset sometimes think they can design security products, but they can’t. And you see the results all over society—in snake-oil cryptography, software, Internet protocols, voting machines, and fare card and other payment systems. Many of these systems had someone in charge of “security” on their teams, but it wasn’t someone who thought like an attacker.

This mindset is difficult to teach, and may be something you’re born with or not. But in order to train people possessing the mindset, they need to search for and find security vulnerabilities—again and again and again. And this is true regardless of the domain. Good cryptographers discover vulnerabilities in others’ algorithms and protocols. Good software security experts find vulnerabilities in others’ code. Good airport security designers figure out new ways to subvert airport security. And so on.

This is so important that when someone shows me a security design by someone I don’t know, my first question is, “What has the designer broken?” Anyone can design a security system that he cannot break. So when someone announces, “Here’s my security system, and I can’t break it,” your first reaction should be, “Who are you?” If he’s someone who has broken dozens of similar systems, his system is worth looking at. If he’s never broken anything, the chance is zero that it will be any good.

Vulnerability research is vital because it trains our next generation of computer security experts. Yes, newly discovered vulnerabilities in software and airports put us at risk, but they also give us more realistic information about how good the security actually is. And yes, there are more and less responsible—and more and less legal—ways to handle a new vulnerability. But the bad guys are constantly searching for new vulnerabilities, and if we have any hope of securing our systems, we need the good guys to be at least as competent. To me, the question isn’t whether it’s ethical to do vulnerability research. If someone has the skill to analyze and provide better insights into the problem, the question is whether it is ethical for him not to do vulnerability research.

This was originally published in InfoSecurity Magazine, as part of a point-counterpoint with Marcus Ranum. You can read Marcus’s half here.

Posted on May 14, 2008 at 11:29 AM • 43 Comments

Interesting Microsoft Patent Application

Guardian Angel:

An intelligent personalized agent monitors, regulates, and advises a user in decision-making processes for efficiency or safety concerns. The agent monitors an environment and present characteristics of a user and analyzes such information in view of stored preferences specific to one of multiple profiles of the user. Based on the analysis, the agent can suggest or automatically implement a solution to a given issue or problem. In addition, the agent can identify another potential issue that requires attention and suggests or implements action accordingly. Furthermore, the agent can communicate with other users or devices by providing and acquiring information to assist in future decisions. All aspects of environment observation, decision assistance, and external communication can be flexibly limited or allowed as desired by the user.

Note that Bill Gates and Ray Ozzie are co-inventors.

Posted on May 13, 2008 at 7:05 AM • 46 Comments

Terrorism as a Tax

Definitely a good way to look at it:

Fear, in other words, is a tax, and al-Qaeda and its ilk have done better at extracting it from Americans than the Internal Revenue Service. Think about the extra half-hour millions of airline passengers waste standing in security lines; the annual cost in lost work hours runs into the billions. Add to that the freight delays at borders, ports and airports, the cost of checking money transfers as well as goods in transit, the wages for beefed-up security forces around the world. And that doesn’t even attempt to put a price tag on the compression of civil liberties or the loss of human dignity from being groped in full public view by Transportation Security Administration personnel at the airport or from having to walk barefoot through the metal detector, holding up your beltless pants. This global transaction tax represents the most significant victory of Terror International to date.

The new fear tax falls most heavily on the United States. Last November, the Commerce Department reported a 17 percent decline in overseas travel to the United States between Sept. 11, 2001, and 2006. (There are no firm figures for 2007 yet, but there seems to have been an uptick.) That slump has cost the country $94 billion in lost tourist spending, nearly 200,000 jobs and $16 billion in forgone tax revenue—and all while the dollar has kept dropping.

Why? The journal Tourism Economics gives the predictable answer: “The perception that U.S. visa and entry policies do not welcome international visitors is the largest factor in the decline of overseas travelers.” Two-thirds of survey respondents worried about being detained for hours because of a misstatement to immigration officials. And here is the ultimate irony: “More respondents were worried about U.S. immigration officials (70 percent) than about crime or terrorism (54 percent) when considering a trip to the country.”

In Beyond Fear I wrote:

Security is a tax on the honest.

If it weren’t for attackers, our lives would be a whole lot easier. In a world where everyone was completely honorable and law-abiding all of the time, everything we bought and did would be cheaper. We wouldn’t have to pay for door locks, police departments, or militaries. There would be no security countermeasures, because people would never consider going where they were not allowed to go or doing what they were not allowed to do. Fraud would not be a problem, because no one would commit fraud. Nor would anyone commit burglary, murder, or terrorism. We wouldn’t have to modify our behavior based on security risks, because there would be none.

But that’s not the world we live in. Security permeates everything we do and supports our society in innumerable ways. It’s there when we wake up in the morning, when we eat our meals, when we’re at work, and when we’re with our families. It’s embedded in our wallets and the global financial network, in the doors of our homes and the border crossings of our countries, in our conversations and the publications we read. We constantly make security trade-offs, whether we’re conscious of them or not: large and small, personal and social. Many more security trade-offs are imposed on us from outside: by governments, by the marketplace, by technology, and by social norms. Security is a part of our world, just as it is part of the world of every other living thing. It has always been a part, and it always will be.

Posted on May 12, 2008 at 6:29 AM • 82 Comments

Cell Phone Spying

A handy guide:

A service called World Tracker lets you use data from cell phone towers and GPS systems to pinpoint anyone’s exact whereabouts, any time—as long as they’ve got their phone on them.

All you have to do is log on to the web site and enter the target phone number. The site sends a single text message to the phone that requires one response for confirmation. Once the response is sent, you are locked in to their location and can track them step-by-step. The response is only required the first time the phone is contacted, so you can imagine how easily it could be handled without the phone’s owner even knowing.

Once connected, the service shows you the exact location of the phone by the minute, conveniently pinpointed on a Google Map. So far, the service is only available in the UK, but the company has indicated plans to expand its service to other countries soon.

[…]

Dozens of programs are available that’ll turn any cell phone into a high-tech, long-range listening device. And the scariest part? They run virtually undetectable to the average eye.

Take, for example, Flexispy. The service promises to let you “catch cheating wives or cheating husbands” and even “bug meeting rooms.” Its tools use a phone’s microphone to let you hear essentially any conversations within earshot. Once the program is installed, all you have to do is dial a number to tap into the phone’s mic and hear everything going on. The phone won’t even ring, and its owner will have no idea you are virtually there at his side.

Posted on May 9, 2008 at 6:27 AM

Tourists, Not Terrorists

Remember the two men who were exhibiting “unusual behavior” on a Washington-state ferry last summer?

The agency’s Seattle field office, along with the Washington Joint Analytical Center, was still seeking the men’s identities and whereabouts Wednesday as ferry service was temporarily shut down when a suspicious package was found in a ferry bathroom and taken away by authorities.

“We had various independent reports from passengers and ferry employees that these two guys were engaging in what they described as unusual activities on the ferries,” Special Agent Robbie Burroughs, a spokeswoman for the FBI in Washington state, told FOXNews.com.

“They felt that these guys were showing an undue interest in the boat itself, in the layout, the workers and the terminal, and it caused them enough concern that they contacted law enforcement about it,” she told FOXNews.com.

The two were photographed by a ferry employee about a month ago, and those photographs were distributed to ferry employees three weeks ago by local law enforcement.

Turns out they were tourists, not terrorists:

Turns out the men, both citizens of a European Union nation, were captivated by the car-carrying capacity of local ferries.

“Where these gentlemen live, they don’t have vehicle ferries. They were fascinated that a ferry could hold that many cars and wanted to show folks back home,” FBI Special Agent Robbie Burroughs said Monday.

[…]

Two weeks ago, the men appeared at a U.S. Embassy and identified themselves as the men in the photo released to the media in August, a couple of weeks after they took a ferry from Seattle to Vashon Island during a business trip, Burroughs said.

They came forward because they worried they’d be arrested if they traveled to the U.S. and so provided proof of their identities, employment and the reason for their July trip to Seattle, according to the FBI.

Posted on May 8, 2008 at 7:32 AM • 53 Comments

Third Annual Movie-Plot Threat Contest Semi-Finalists

A month ago I announced the Third Annual Movie-Plot Threat Contest:

For this contest, the goal is to create fear. Not just any fear, but a fear that you can alleviate through the sale of your new product idea. There are lots of risks out there, some of them serious, some of them so unlikely that we shouldn’t worry about them, and some of them completely made up. And there are lots of products out there that provide security against those risks.

Your job is to invent one. First, find a risk or create one. It can be a terrorism risk, a criminal risk, a natural-disaster risk, a common household risk—whatever. The weirder the better. Then, create a product that everyone simply has to buy to protect him- or herself from that risk. And finally, write a catalog ad for that product.

[…]

Entries are limited to 150 words … because fear doesn’t require a whole lot of explaining. Tell us why we should be afraid, and why we should buy your product.

Submissions are in. The blog entry has 327 comments. I’ve read them all and chosen five semi-finalists.

It’s not in the running, but reader “False Data” deserves special mention for his Safe-T-Nav, a GPS system that detects high crime zones. It would be a semi-finalist, but it already exists.

Cast your vote; I’ll announce the winner on the 15th.

Posted on May 7, 2008 at 2:33 PM101 Comments

Al Qaeda Threat Overrated

Seems obvious to me:

“I reject the notion that Al Qaeda is waiting for ‘the big one’ or holding back an attack,” Sheehan writes. “A terrorist cell capable of attacking doesn’t sit and wait for some more opportune moment. It’s not their style, nor is it in the best interest of their operational security. Delaying an attack gives law enforcement more time to detect a plot or penetrate the organization.”

Terrorism is not about standing armies, mass movements, riots in the streets or even palace coups. It’s about tiny groups that want to make a big bang. So you keep tracking cells and potential cells, and when you find them you destroy them. After Spanish police cornered leading members of the group that attacked trains in Madrid in 2004, they blew themselves up. The threat in Spain declined dramatically.

Indonesia is another case Sheehan and I talked about. Several high-profile associates of bin Laden were nailed there in the two years after 9/11, then sent off to secret CIA prisons for interrogation. The suspects are now at Guantánamo. But suicide bombings continued until police using forensic evidence—pieces of car bombs and pieces of the suicide bombers—tracked down Dr. Azahari bin Husin, “the Demolition Man,” and the little group around him. In a November 2005 shootout the cops killed Dr. Azahari and crushed his cell. After that such attacks in Indonesia stopped.

The drive to obliterate the remaining hives of Al Qaeda training activity along the Afghanistan-Pakistan frontier and those that developed in some corners of Iraq after the U.S. invasion in 2003 needs to continue, says Sheehan. It’s especially important to keep wanna-be jihadists in the West from joining with more experienced fighters who can give them hands-on weapons and explosives training. When left to their own devices, as it were, most homegrown terrorists can’t cut it. For example, on July 7, 2005, four bombers blew themselves up on public transport in London, killing 56 people. Two of those bombers had trained in Pakistan. Another cell tried to do the same thing two weeks later, but its members had less foreign training, or none. All the bombs were duds.

[…]

Sir David Omand, who used to head Britain’s version of the National Security Agency and oversaw its entire intelligence establishment from the Cabinet Office earlier this decade, described terrorism as “one corner” of the global security threat posed by weapons proliferation and political instability. That in turn is only one of three major dangers facing the world over the next few years. The others are the deteriorating environment and a meltdown of the global economy. Putting terrorism in perspective, said Sir David, “leads naturally to a risk management approach, which is very different from what we’ve heard from Washington these last few years, which is to ‘eliminate the threat’.”

Yet when I asked the panelists at the forum if Al Qaeda has been overrated, suggesting as Sheehan does that most of its recruits are bunglers, all shook their heads. Nobody wants to say such a thing on the record, in case there’s another attack tomorrow and their remarks get quoted back to them.

That’s part of what makes Sheehan so refreshing. He knows there’s a big risk that he’ll be misinterpreted; he’ll be called soft on terror by ass-covering bureaucrats, breathless reporters and fear-peddling politicians. And yet he charges ahead. He expects another attack sometime, somewhere. He hopes it won’t be made to seem more apocalyptic than it is. “Don’t overhype it, because that’s what Al Qaeda wants you to do. Terrorism is about psychology.” In the meantime, said Sheehan, finishing his fruit juice, “the relentless 24/7 job for people like me is to find and crush those guys.”

I’ve ordered Sheehan’s book, Crush the Cell: How to Defeat Terrorism Without Terrorizing Ourselves.

Posted on May 7, 2008 at 12:56 PM19 Comments

London’s Cameras Don’t Reduce Crime

News here and here:

Massive investment in CCTV cameras to prevent crime in the UK has failed to have a significant impact, despite billions of pounds spent on the new technology, a senior police officer piloting a new database has warned. Only 3% of street robberies in London were solved using CCTV images, despite the fact that Britain has more security cameras than any other country in Europe.

[…]

Use of CCTV images for court evidence has so far been very poor, according to Detective Chief Inspector Mick Neville, the officer in charge of the Metropolitan police unit. “CCTV was originally seen as a preventative measure,” Neville told the Security Document World Conference in London. “Billions of pounds has been spent on kit, but no thought has gone into how the police are going to use the images and how they will be used in court. It’s been an utter fiasco: only 3% of crimes were solved by CCTV. There’s no fear of CCTV. Why don’t people fear it? [They think] the cameras are not working.”

This is, of course, absolutely no surprise.

Posted on May 7, 2008 at 6:53 AM36 Comments

Dual-Use Technologies and the Equities Issue

On April 27, 2007, Estonia was attacked in cyberspace. Following a diplomatic incident with Russia about the relocation of a Soviet World War II memorial, the networks of many Estonian organizations, including the Estonian parliament, banks, ministries, newspapers and broadcasters, were attacked and—in many cases—shut down. Estonia was quick to blame Russia, which was equally quick to deny any involvement.

It was hyped as the first cyberwar: Russia attacking Estonia in cyberspace. But nearly a year later, evidence that the Russian government was involved in the denial-of-service attacks still hasn’t emerged. Though Russian hackers were indisputably the major instigators of the attack, the only individuals positively identified have been young ethnic Russians living inside Estonia, who were pissed off over the statue incident.

You know you’ve got a problem when you can’t tell a hostile attack by another nation from bored kids with an axe to grind.

Separating cyberwar, cyberterrorism and cybercrime isn’t easy; these days you need a scorecard to tell the difference. It’s not just that it’s hard to trace people in cyberspace, it’s that military and civilian attacks—and defenses—look the same.

The traditional term for technology the military shares with civilians is “dual use.” Unlike hand grenades and tanks and missile targeting systems, dual-use technologies have both military and civilian applications. Dual-use technologies used to be exceptions; even things you’d expect to be dual use, like radar systems and toilets, were designed differently for the military. But today, almost all information technology is dual use. We both use the same operating systems, the same networking protocols, the same applications, and even the same security software.

And attack technologies are the same. The recent spurt of targeted hacks against U.S. military networks, commonly attributed to China, exploits the same vulnerabilities and uses the same techniques as criminal attacks against corporate networks. Internet worms make the jump to classified military networks in less than 24 hours, even if those networks are physically separate. The Navy Cyber Defense Operations Command uses the same tools against the same threats as any large corporation.

Because attackers and defenders use the same IT technology, there is a fundamental tension between cyberattack and cyberdefense. The National Security Agency has referred to this as the “equities issue,” and it can be summarized as follows: When a military discovers a vulnerability in a dual-use technology, they can do one of two things. They can alert the manufacturer and fix the vulnerability, thereby protecting both the good guys and the bad guys. Or they can keep quiet about the vulnerability and not tell anyone, thereby leaving the good guys insecure but also leaving the bad guys insecure.

The equities issue has long been hotly debated inside the NSA. Basically, the NSA has two roles: eavesdrop on their stuff, and protect our stuff. When both sides use the same stuff, the agency has to decide whether to exploit vulnerabilities to eavesdrop on their stuff or close the same vulnerabilities to protect our stuff.
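One way to see the shape of that choice is as a toy expected-value comparison. This is purely my own illustration; the variables and numbers below are invented for the sketch, not any actual NSA decision procedure:

# Toy model of the equities trade-off (illustrative only; every
# quantity here is an assumption).
def equities_decision(p_rediscovery: float,
                      offense_value: float,
                      defense_loss: float) -> str:
    """Hoard or disclose a vulnerability in a dual-use technology?

    p_rediscovery: chance an adversary independently finds and uses
        the bug while we sit on it.
    offense_value: what exploiting the bug ourselves is worth.
    defense_loss: expected damage to our own systems if the bug is
        used against us.
    """
    expected_hoard = offense_value - p_rediscovery * defense_loss
    expected_disclose = 0.0  # fixed for everyone: no gain, no loss
    return "hoard" if expected_hoard > expected_disclose else "disclose"

# Because nearly all IT is dual use, defense_loss tends to dwarf
# offense_value, which is the argument for disclosure:
print(equities_decision(p_rediscovery=0.5, offense_value=10.0,
                        defense_loss=100.0))  # -> disclose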

In the 1980s and before, the tendency of the NSA was to keep vulnerabilities to themselves. In the 1990s, the tide shifted, and the NSA was starting to open up and help us all improve our security defense. But after the attacks of 9/11, the NSA shifted back to the attack: vulnerabilities were to be hoarded in secret. Slowly, things in the U.S. are shifting back again.

So now we’re seeing the NSA helping secure Windows Vista and releasing its own version of Linux. The DHS, meanwhile, is funding a project to secure popular open source software packages, and across the Atlantic the UK’s GCHQ is finding bugs in PGPDisk and reporting them back to the company. (NSA is rumored to be doing the same thing with BitLocker.)

I’m in favor of this trend, because my security improves for free. Whenever the NSA finds a security problem and gets the vendor to fix it, our security gets better. It’s a side-benefit of dual-use technologies.

But I want governments to do more. I want them to use their buying power to improve my security. I want them to offer countrywide contracts for software, both security and non-security, that have explicit security requirements. If these contracts are big enough, companies will work to modify their products to meet those requirements. And again, we all benefit from the security improvements.

The only example of this model I know about is a U.S. government-wide procurement competition for full-disk encryption, but this can certainly be done with firewalls, intrusion detection systems, databases, networking hardware, even operating systems.

When it comes to IT technologies, the equities issue should be a no-brainer. The good uses of our common hardware, software, operating systems, network protocols, and everything else vastly outweigh the bad uses. It’s time that the government used its immense knowledge and experience, as well as its buying power, to improve cybersecurity for all of us.

This essay originally appeared on Wired.com.

Posted on May 6, 2008 at 5:17 AM34 Comments

The Doghouse: Passwordsafe.com

This isn’t my Password Safe. This is PasswordSafe.com. Password Safe is an open-source application that lives on your computer and encrypts your passwords. PasswordSafe.com lets you store your passwords on their server. They promise not to look at them.

Can I trust PasswordSafe?

As we mentioned, pretty much every function is automated, no-one here ever sees your information as it’s all taken care of by the programs and encrypted into the database. Again we’ll remind you, we do not recommend you store sensitive information at PasswordSafe. In house, we’ve used this service for many sites, banner programs, affiliate programs, free email services and much more.
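For contrast, the local model is simple enough to sketch in a few lines. This is not Password Safe’s actual file format (the real program is built around Twofish); it’s a hypothetical illustration in Python, using the third-party cryptography package, of a vault that never leaves your machine:

import base64
import json
import os

from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

def derive_key(master_password: bytes, salt: bytes) -> bytes:
    # Stretch the master password into the 32-byte urlsafe-base64 key
    # that Fernet expects.
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32,
                     salt=salt, iterations=200_000)
    return base64.urlsafe_b64encode(kdf.derive(master_password))

def save_vault(path: str, master_password: bytes, entries: dict) -> None:
    # Fresh random salt on every save, stored alongside the ciphertext.
    salt = os.urandom(16)
    token = Fernet(derive_key(master_password, salt)).encrypt(
        json.dumps(entries).encode("utf-8"))
    with open(path, "wb") as f:
        f.write(salt + token)

def load_vault(path: str, master_password: bytes) -> dict:
    with open(path, "rb") as f:
        blob = f.read()
    salt, token = blob[:16], blob[16:]
    # Fernet authenticates as well as decrypts: a wrong password or a
    # tampered file raises InvalidToken instead of returning garbage.
    return json.loads(Fernet(derive_key(master_password, salt)).decrypt(token))

save_vault("vault.bin", b"example master password",
           {"example.com": "hunter2"})
print(load_vault("vault.bin", b"example master password"))

The only thing on disk is ciphertext, and the only secret is your master password. PasswordSafe.com inverts that architecture and substitutes a promise.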

Posted on May 5, 2008 at 6:37 AM70 Comments

Schneier Interviews

Two weeks ago I was interviewed on Dutch radio. The introduction and questions are in Dutch, but my answers are in English.

Three weeks ago I was interviewed on Anti War Radio. It was an odd interview, starting from my essay “Portrait of the Modern Terrorist as an Idiot” and then meandering into the role of government versus corporations in security.

This written Q&A was conducted on video, even though it’s presented as text, so it doesn’t read as well as the ones I’ve done via e-mail. This is a video interview from the RSA Conference.

And finally, three video interviews, one from the U.K. and two from Australia.

I’m not trying to brag. It’s just easier for me if these links are all in one place so I can search for them later.

Posted on May 2, 2008 at 1:53 PM6 Comments

Protect Your Macintosh Copies Available

In 1994, I published my second book, Protect Your Macintosh. You’ve probably never heard of it; it died a quiet and lonely death.

Going through some boxes, I found a dozen copies of the book: first and, I think, only printing. I’m willing to send one to anyone who wants one for $5 postage. (That’s in the U.S. If you’re elsewhere, we’ll figure out postage.) Please let me know via e-mail if you’re interested.

And I can assure you that, fourteen years later, there’s absolutely nothing of practical value in the book. This offer should only interest collectors. And even them, not that much.

I also have seven copies of my third book, E-Mail Security, from 1995, which also has nothing in it of any practical value anymore. Again, $5 for postage.

EDITED TO ADD (5/3): Sold out; sorry.

Posted on May 2, 2008 at 11:12 AM30 Comments

Sky Marshals on the No-Fly List

If this weren’t so sad, it would be funny:

The problem with federal air marshals’ (FAM) names matching those of suspected terrorists on the no-fly list has persisted for years, say air marshals familiar with the situation.

One air marshal said it has been “a major problem, where guys are denied boarding by the airline.”

“In some cases, planes have departed without any coverage because the airline employees were adamant they would not fly,” the air marshal said. “I’ve seen guys actually being denied boarding.”

A second air marshal says one agent “has been getting harassed for six years because his exact name is on the no-fly list.”

Another article.

Seriously—if these people can’t get their names off the list, what hope do the rest of us have? Not that the no-fly list has any real value, anyway.

Posted on May 2, 2008 at 7:14 AM54 Comments

What to Worry About

Snarky, but basically correct:

3. Male Family Members and Friends (Especially if they are drunk and you are young and foreign-born.)

It’s the strange man we fear—the footsteps in the dark—the unlocked back door. The correct part of the constant American crime fantasy is that it is usually a man hunting us. Approximately 90% of all murders are committed by males. But stop worrying so much about strangers you don’t know and think about the strangers you know. Too often, we invite our predators in and offer them a drink. The leading cause of death for black women aged 18 to 45 is domestic violence. The New York Health Department found that lovers committed 60% of all murders of women. Young foreign-born women were 87% more likely to be killed by a lover than by a stranger. Females are much more likely to be victimized by someone they know. Strangers committed about 14% of all murders in 2002, while a family member or an acquaintance committed 43%. Family members commit two-thirds of murders of children under five. Two-thirds of violent crimes committed by acquaintances involved alcohol. Think about that at your next dinner party.

4. People of Your Own So-called Race

An extension of our narcissism is the belief that people who are like us are sane. But it’s the people who are most like us who are most likely to kill us. Blacks murdered more than 90% of black murder victims. White criminals murdered more than 80% of white murder victims. I’m not saying strangers are safer than the people we know; I’m just saying they might be.

Posted on May 1, 2008 at 2:43 PM28 Comments

Heroin vs. Terrorism

A nice essay on security trade-offs:

The mismatch between the resources devoted to fighting organised crime compared with those directed towards counter-terrorism is unnerving. Government says that there are millions of pounds in police budgets that should be devoted to dealing with organised crime. In truth, only a handful of British police forces know how to tackle it. The ridiculous Victorian patchwork of shire constabularies means that most are too small to tackle serious criminality that doesn’t recognise country, never mind county, borders.

The Serious Organised Crime Agency (Soca) was launched two years ago as Britain’s equivalent of the FBI, with the remit of taking on the Mr Bigs of international crime. But ministers have trimmed Soca’s budget this year. Far from expanding to counter the ever-growing threat, the agency is shrinking and there is smouldering unhappiness in the ranks. Soca’s budget for taking the fight to the cartels and syndicates is £400 million—exactly the same amount that the Government intends to spend overseas in countries such as Pakistan on workshops and seminars to counter al-Qaeda’s ideology.

Posted on May 1, 2008 at 6:56 AM21 Comments
