Blog: April 2007 Archives

Attackers Exploiting Security Procedures

In East Belfast, burglars called in a bomb threat. Residents evacuated their homes, and then the burglars proceeded to rob eight empty houses on the block.

I've written about this sort of thing before: sometimes security procedures themselves can be exploited by attackers. It was Step 4 of my "five-step process" from Beyond Fear (pages 14-15). A national ID card makes identity theft more lucrative; forcing people to remove their laptops at airport security checkpoints makes laptop theft more common.

Moral: you can't just focus on one threat. You need to look at the broad spectrum of threats, and pay attention to how security against one affects the others.

Posted on April 30, 2007 at 12:27 PM • 29 Comments

Sponsor-Only Security at the 2012 London Olympics

If you want your security technology to be considered for the London Olympics, you have to be a major sponsor of the event.

...he casually revealed that because neither of these companies was a ‘major sponsor’ of the Olympics their technology could not be used.

Yes, you read that right: as far as the technology behind the security of the London Olympic Games is concerned, best of breed and suitability for purpose do not come into it; paying a large amount of money to the International Olympic Committee does.

I have repeatedly said that security is generally only part of a larger context, but this borders on ridiculous.

Posted on April 30, 2007 at 5:55 AM • 37 Comments

Schneier Talk at the British Computer Society

The MP3 of my March 21 talk at the British Computer Society -- on information security trends and economic considerations -- is on the Internet.

EDITED TO ADD (4/30): Ogg file here.

Posted on April 28, 2007 at 2:05 PM • 12 Comments

Richard Clarke on the "Puppy Dog" Theory of Terrorism

Excellent op-ed by someone who actually knows about this stuff:

How is this odd terrorist puppy dog behavior supposed to work? The President must believe that terrorists are playing by some odd rules of chivalry. Would this be the "only one slaughter ground at a time" rule of terrorism?

Of course, nothing about our being "over there" in any way prevents terrorists from coming here. Quite the opposite, the evidence is overwhelming that our presence provides motivation for people throughout the Arab world to become anti-American terrorists.

Some 100,000 Iraqis, probably more, have been killed since our invasion. They have parents, children, cousins and fellow tribal clan members who have pledged revenge no matter how long it takes. For many, that revenge is focused on America.

Posted on April 27, 2007 at 11:54 AM • 60 Comments

Commentary on Vista Security and the Microsoft Monopoly

This is right:

As Dan Geer has been saying for years, Microsoft has a bit of a problem. Either it stonewalls and pretends there is no security problem -- which is what Vista does by taking over your computer to force patches (and DRM) down your throat -- or it actually changes the basic design and produces a secure operating system, which risks people wondering why they're sticking with Windows and Microsoft at all. It turns out the former course may also lead to the latter result:

If you fit Microsoft's somewhat convoluted definition of poor, it still wants to lock you in; you might get rich enough to afford the full-priced stuff someday. It is at a dangerous crossroads: if its software bumps up the price of a computer by 100 per cent, people might look to alternatives.

That means no MeII DRM infection lock in, no mass migration to the newer Office obfuscated and patented file formats, and worse yet, people might utter the W word. Yes, you guessed it, 'why'. People might ask why they are sticking with the MS lock in, and at that point, it is in deep trouble.

Monopolies eventually overreach themselves and die. Maybe it's finally Microsoft's time to die. That would decrease the risk to the rest of us.

Posted on April 27, 2007 at 7:03 AM • 72 Comments

Triggering Bombs by Remote Key Entry Devices

I regularly read articles about terrorists using cell phones to trigger bombs. The Thai government seems to be particularly worried about this; two years ago I blogged about a particularly bizarre movie-plot threat along these lines. And last year I blogged about the cell phone network being restricted after the Mumbai terrorist bombings.

Efforts to restrict cell phone usage because of this threat are ridiculous. It's a perfect example of a "movie-plot threat": by focusing on the specifics of a particular tactic rather than the broad threat, we simply force the bad guys to modify their tactics. Lots of money spent; no security gained.

And that's exactly what happened in Thailand:

Authorities said yesterday that police are looking for 40 Daihatsu keyless remote entry devices, some of which they believe were used to set off recent explosions in the deep South.

Militants who have in the past used mobile phones to set off bombs are being forced to change their detonation methods as security forces continue to block mobile phone signals while carrying out security missions, preventing them from carrying out their attacks.


Police found one of the Daihatsu keys near a blast site in Yala on April 13. It is thought the bomber dropped it while fleeing the scene. The key had been modified so its signal covered a longer distance, police said.

Posted on April 26, 2007 at 1:28 PM • 45 Comments

Recognizing "Hinky" vs. Citizen Informants

On the subject of people noticing and reporting suspicious actions, I have been espousing two views that some find contradictory. One, we are all safer if police, guards, security screeners, and the like ignore traditional profiling and instead pay attention to people acting hinky: not right. And two, if we encourage people to contact the authorities every time they see something suspicious, we're going to waste our time chasing false alarms: foreigners whose customs are different, people who are disliked by someone, and so on.

The key difference is expertise. People trained to be alert for something hinky will do much better than any profiler, but people who have no idea what to look for will do no better than random.

Here's a story that illustrates this: Last week, a student at the Rochester Institute of Technology was arrested with two illegal assault weapons and 320 rounds of ammunition in his dorm room and car:

The discovery of the weapons was made only by chance. A conference center worker who served in the military was walking past Hackenburg's dorm room. The door was shut, but the worker heard the all-too-familiar racking sound of a weapon, said the center's director Bill Gunther.

Notice how expertise made the difference. The "conference center worker" had the right knowledge to recognize the sound and to understand that it was out of place in the environment in which he heard it. He wasn't primed to be on the lookout for suspicious people and things; his trained awareness kicked in automatically. He recognized hinky, and he acted on that recognition. A random person simply can't do that; he won't recognize hinky when he sees it. He'll report imams for praying, a neighbor he's pissed at, or people at random. He'll see an English professor recycling paper, and report a Middle-Eastern-looking man leaving a box on the sidewalk.

We all have some experience with this. Each of us has some expertise in some topic, and will occasionally recognize that something is wrong even though we can't fully explain what or why. An architect might feel that way about a particular structure; an artist might feel that way about a particular painting. I might look at a cryptographic system and intuitively know something is wrong with it, well before I figure out exactly what. Those are all examples of a subliminal recognition that something is hinky -- in our particular domain of expertise.

Good security people have the knowledge, skill, and experience to do that in security situations. It's the difference between a good security person and an amateur.

This is why behavioral assessment profiling is a good idea, while the Terrorist Information and Prevention System (TIPS) isn't. This is why training truckers to look out for suspicious things on the highways is a good idea, while a vague list of things to watch out for isn't. It's why this Israeli driver recognized a passenger as a suicide bomber, while an American driver probably wouldn't.

This kind of thing isn't easy to train. (Much has been written about it, though; Malcolm Gladwell's Blink discusses this in detail.) You can't learn it from watching a seven-minute video. But the more we focus on this -- the more we stop wasting our airport security resources on screeners who confiscate rocks and snow globes, and instead focus them on well-trained screeners walking through the airport looking for hinky -- the more secure we will be.

EDITED TO ADD (4/26): Jim Harper makes an important clarification.

Posted on April 26, 2007 at 5:43 AM • 72 Comments

English Professor Reported for Recycling Paper While Looking Middle Eastern

This is just awful:

Because of my recycling, the bomb squad came, then the state police. Because of my recycling, buildings were evacuated, classes were canceled, the campus was closed. No. Not because of my recycling. Because of my dark body. No. Not even that. Because of his fear. Because of the way he saw me. Because of the culture of fear, mistrust, hatred and suspicion that is carefully cultivated in the media, by the government, by people who claim to want to keep us "safe."


What does that community mean to me, a person who has to walk by the ROTC offices every day on my way to my own office just down the hall -- who was watched, noted and reported, all in a day's work? Today, we gave in willingly and wholeheartedly to a culture of fear and blaming and profiling. It is deemed perfectly appropriate behavior to spy on one another and police one another and report on one another. Such behaviors exist most strongly in closed, undemocratic and fascist societies.

Posted on April 25, 2007 at 3:02 PM • 98 Comments

A Rant from a Cop

People use policemen as props in their personal disputes:

Noon, it's 59 degrees and I get a call from a guy whose neighbor's dog has been left in a car. I get there, the windows are cracked, and the dog has only been in there 20 minutes. It's 59 degrees! It's not summer, and if it were the dead of winter I'd say the car is a $20,000 dog house. But it turns out this guy has a running dispute with his neighbor, so guess who he calls to irritate the guy a little more? Me. When I go to leave, the asshole that called this in yells, "hey, aren't you gonna do anything?" I explain why I am not and he says, "great, I'm writing a letter to the paper." Holy shit. Now I'm the bad guy because I didn't embarrass your target enough for you? Grow the hell up.

When the police implement programs to let ordinary citizens report suspected terrorists, this is the kind of thing that will result.

Posted on April 25, 2007 at 1:08 PM • 36 Comments

Stage Weapons Banned

I wish I could make a joke about security theater at the theater, but this is just basic stupidity:

Dean of Student Affairs Betty Trachtenberg has limited the use of stage weapons in theatrical productions.

Students involved in this weekend's production of "Red Noses" said they first learned of the new rules on Thursday morning, the same day the show was slated to open. They were subsequently forced to alter many of the scenes by swapping more realistic-looking stage swords for wooden ones, a change that many students said was neither a necessary nor a useful response to the tragedy at Virginia Tech.

According to students involved in the production, Trachtenberg has banned the use of some stage weapons in all of the University's theatrical productions.

Not only does this not make anyone safer, it doesn't even make anyone feel safer.

EDITED TO ADD (4/25): The order has been rescinded, without any demonstration of common sense:

"I think people should start thinking about other people rather than trying to feel sorry for themselves and thinking that the administration is trying to thwart their creativity," Trachtenberg said. "They're not using their own intelligence. … We have to think of the people who might be affected by seeing real-life weapons."

Posted on April 25, 2007 at 7:32 AM • 64 Comments

Top 10 Internet Crimes of 2006

According to the Internet Crime Complaint Center and reported in U.S. News and World Report, auction fraud and non-delivery of items purchased are far and away the most common Internet crimes. Identity theft is way down near the bottom.

Although the number of complaints last year -- 207,492 -- fell by 10 percent, the overall losses hit a record $198 million. By far the most reported crime: Internet auction fraud, garnering 45 percent of all complaints. Also big was nondelivery of merchandise or payment, which notched second at 19 percent. The biggest money losers: those omnipresent Nigerian scam letters, which fleeced victims on average of $5,100 -- followed by check fraud at $3,744 and investment fraud at $2,694.


The feds caution that these figures don't represent a scientific sample of just how much Net crime is out there. They note, for example, that the high number of auction fraud complaints is due, in part, to eBay and other big E-commerce outfits offering customers direct links to the IC3 website. And it's tough to measure what may be the Web's biggest scourge, child porn, simply by complaints. Still, the survey is a useful snapshot, even if it tells us what we already know: that the Internet, like the rest of life, is full of bad guys. Caveat emptor.

Posted on April 24, 2007 at 12:25 PM • 21 Comments

How Australian Authorities Respond to Potential Terrorists

Watch the video of how the Australian authorities react when someone -- dressed either as an American or Arab tourist -- films the Sydney Harbour Bridge and a nuclear reactor.

The synopsis: The Arab is intercepted within three minutes both times, while the U.S. tourist is given instructions on how to get inside the nuclear facility.

Moral for terrorists: dress like an American.

By the way, Lucas Heights is a research reactor. It produces medical isotopes and performs research, and doesn't produce power.

Posted on April 24, 2007 at 7:12 AM • 35 Comments

Hacking the U.S. Post Office

This is clever:

Many U.S. e-commerce shops won't send their goods to Russia or to the countries of the ex-USSR.

Some shops do, but the delivery costs differ greatly from domestic rates; they are usually much higher.

So what did some Russians invent? A way to fool the delivery system.

It's no secret that most big shops use automated order-processing systems. To decide whether an address is in the USA or Canada, the system checks the ZIP code, the state or province name, and the words "USA" or "CANADA."

So it was possible to put a totally Russian address in the order delivery form -- say, Moscow, Lenin St. 20, Russia -- in the address fields (there is usually plenty of space to enter something that long), put Canada in the country field, and a Canadian postal code in the ZIP code field.

What happens next? The parcel travels to Canada, to the area the specified postal code belongs to, where the postal workers see that it's not a Canadian address but a Russian one. They assume it's some sort of mistake and forward the parcel on -- to Russia.
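The trick works because the validation logic looks only at the structured fields. Here is a minimal sketch of that kind of naive check -- my own illustration, with invented field names and rules, not any real shop's code:

```python
# Hypothetical sketch of the naive order validation the story describes:
# the shop decides where a parcel ships based only on the country field
# and the ZIP/postal code, never checking the free-text street address.

import re

def looks_like_canadian_postcode(code: str) -> bool:
    # Canadian postal codes follow the pattern A1A 1A1.
    return re.fullmatch(r"[A-Za-z]\d[A-Za-z] ?\d[A-Za-z]\d", code.strip()) is not None

def accept_order(country: str, postcode: str, street_address: str) -> bool:
    """Accept the order if the country/postcode pair looks domestic.

    Note the bug the trick exploits: street_address is never examined.
    """
    country = country.strip().upper()
    if country == "USA":
        return re.fullmatch(r"\d{5}(-\d{4})?", postcode.strip()) is not None
    if country == "CANADA":
        return looks_like_canadian_postcode(postcode)
    return False  # "we don't ship there"

# A completely Russian street address sails through, because only the
# country name and postal code are validated:
order = {
    "country": "Canada",
    "postcode": "M5V 2T6",  # a valid-format Canadian postcode
    "street_address": "Moscow, Lenin St. 20, Russia",
}
print(accept_order(**order))  # True -- the parcel ships to Canada,
                              # and the post office forwards it onward
```

The fix is equally simple in principle -- cross-check the free-text address against the claimed country -- but that requires parsing messy address data, which is exactly the work the automated system was built to avoid.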

Posted on April 23, 2007 at 1:00 PM • 43 Comments

Keystroke Biometrics

This sounds like a good idea. From a news article:

The technology, which measures the time for which keys are held down, as well as the length between strokes, takes advantage of the fact that most computer users evolve a method of typing which is both consistent and idiosyncratic -- especially for words used frequently, such as a user name and password.

When registering, the user types his or her details nine times so that the software can generate a profile. Future login attempts are measured against the profile which, the company claims, can recognise the same user’s keystrokes with 99 per cent accuracy, using what is known as a "behavioural biometric."

I wouldn't want to automatically block users unless they get this right, and the false-positive/false-negative ratio would have to be jiggered properly, but if they can get it working right, it's an extra layer of authentication for "free."
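To make the mechanism concrete, here is a minimal sketch of how such a check might work -- my own illustration, not the vendor's actual algorithm; the timing data and threshold are invented:

```python
# Minimal keystroke-dynamics sketch: build a profile from the mean and
# spread of inter-key intervals across nine enrollment samples, then
# score a login attempt by how far its timings fall from that profile.

from statistics import mean, stdev

def build_profile(samples):
    """samples: list of timing vectors (one per enrollment typing run),
    each a list of inter-keystroke intervals in milliseconds."""
    columns = list(zip(*samples))
    # Floor the spread at 1 ms so a perfectly consistent typist
    # doesn't cause a division by zero later.
    return [(mean(c), max(stdev(c), 1.0)) for c in columns]

def match_score(profile, attempt):
    """Average z-score distance of the attempt from the profile;
    smaller means more similar."""
    devs = [abs(t - m) / s for (m, s), t in zip(profile, attempt)]
    return sum(devs) / len(devs)

# Nine hypothetical enrollment runs of the same password (intervals in ms).
enrollment = [[110, 95, 140, 80], [112, 97, 138, 82], [108, 93, 141, 79],
              [111, 96, 139, 81], [109, 94, 142, 80], [113, 98, 137, 83],
              [110, 95, 140, 80], [112, 96, 139, 81], [109, 94, 141, 79]]
profile = build_profile(enrollment)

genuine  = [111, 95, 140, 81]   # close to the enrolled rhythm
impostor = [60, 150, 90, 200]   # right password, wrong rhythm

THRESHOLD = 3.0  # tuning this trades false positives against false negatives
print(match_score(profile, genuine) < THRESHOLD)   # True
print(match_score(profile, impostor) < THRESHOLD)  # False
```

The interesting engineering question is all in that threshold: set it too tight and you lock out the legitimate user on a bad typing day, too loose and the "behavioural biometric" adds nothing.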

Another news article. Slashdot thread.

Posted on April 23, 2007 at 6:49 AM • 64 Comments

Least Risk Bomb Location

This fascinating tidbit is from Aviation Week and Space Technology (April 9, 2007, p. 21), in David Bond's "Washington Outlook" column (unfortunately, not online).

Need to Know

Security and society's litigious bent combine to make airlines unsuited for figuring out the best place to put a suspected explosive device discovered during a flight, AirTran Airways tells the FAA (Federal Aviation Administration). Commenting on a proposed rule that would require, among other things, designation of a "least risk bomb location" (LRBL) -- the place on an aircraft where a bomb would do the least damage if it exploded -- AirTran engineering director Rick Shideler says it's hard for airlines to get aircraft design information related to such a location because of agreements between manufacturers and the Homeland Security Department. The carrier got LRBL information for its 717s and 737s from Boeing but can't find out why the locations were chosen, "or even who specifically picked them," because of liability laws.

I'd never heard of an LRBL before, but the FAA has published proposed guidelines on them. Apparently flight crews are trained to stash suspicious objects there.

But liability seems to be getting in the way of security and common sense here. It seems reasonable that an airline's engineering director should be allowed to understand the technical reasoning behind the choice of LRBL, and maybe even give the manufacturer feedback on it.

EDITED TO ADD (4/21): Comment (below) from a pilot: The designation of a "least risk bomb location" is nothing new. All planes have a designated area where potentially dangerous packages should be placed. Usually it's in the back, adjacent to a door. There are a slew of procedures to be followed if an explosive device is found on board: depressurizing the plane, moving the item to the LRBL, and bracing/smothering it with luggage and other dense materials so that the force of the blast is directed outward, through the door.

Posted on April 20, 2007 at 1:39 PM • 34 Comments

Social Engineering Notes

This is a fantastic story of a major prank pulled off at the Super Bowl this year. Basically, five people smuggled more than a quarter of a ton of material into Dolphin Stadium in order to display their secret message on TV. A summary:

Just days after the Boston bomb scare, another team of Boston-based pranksters smuggled and distributed 2,350 suspicious light-up devices into the Super Bowl. Due to its attractiveness as a terrorist target, Dolphin Stadium was on a Level One security alert, a level usually reserved for Presidential inaugurations. By posing as media reporters, the pranksters were able to navigate 95 boxes through federal marshals, Homeland Security agents, bomb squads, police dogs, and a five-ton X-ray crane.

Given all the security, it's amazing how easy it was for them to get all that random stuff inside the security perimeter. But to those of us who follow this sort of thing, it shouldn't be. His observations are spot on:

1. Wear a suit.
2. Wear a Bluetooth headset.
3. Pretend to be talking loudly to someone on the other line.
4. Carry a clipboard.
5. Be white.

Again, no surprise here. But it makes you wonder what's the point of annoying the hell out of ordinary citizens with security measures (like pat down searches) when the emperor has no clothes.

Someone who crashed the Oscars last year gave similar advice:

Show up at the theater, dressed as a chef carrying a live lobster, looking really concerned.

On a much smaller scale, here's someone's story of social engineering a bank branch:

I enter the first branch at approximately 9:00AM. Dressed in Dickies coveralls, a baseball cap, work boots and sunglasses I approach the young lady at the front desk.

"Hello," I say. "John Doe with XYZ Pest Control, here to perform your pest inspection.?? I flash her the smile followed by the credentials. She looks at me for a moment, goes "Uhm… okay… let me check with the branch manager…" and picks up the phone. I stand around twiddling my thumbs and wait while the manager is contacted and confirmation is made. If all goes according to plan, the fake emails I sent out last week notifying branch managers of our inspection will allow me access.

It does.

Social engineering is surprisingly easy. As I said in Beyond Fear (page 144):

Social engineering will probably always work, because so many people are by nature helpful and so many corporate employees are naturally cheerful and accommodating. Attacks are rare, and most people asking for information or help are legitimate. By appealing to the victim’s natural tendencies, the attacker will usually be able to cozen what she wants.

All it takes is a good cover story.

EDITED TO ADD (4/20): The first commenter suggested that the Zug story is a hoax. I think he makes a good argument, and I have no evidence to refute it. Does anyone know for sure?

EDITED TO ADD (4/21): Wired concludes that the Super Bowl stunt happened, but that no one noticed. Engadget is leaning toward hoax.

Posted on April 20, 2007 at 6:41 AM • 66 Comments

Citizen-Counterterrorist Training Video

From the Michigan State Police. The seven signs, according to the video:

Surveillance
Elicitation
Tests of security
Acquiring supplies
Suspicious people who "don't belong"
Dry runs/trial runs
Deploying assets or getting into position

I especially like the scenes of concerned citizens calling the police. Anyone care to guess what the false alarm rate would be if everyone started making phone calls like this?

Posted on April 19, 2007 at 2:15 PM • 44 Comments

A Security Market for Lemons

More than a year ago, I wrote about the increasing risks of data loss because more and more data fits in smaller and smaller packages. Today I use a 4-GB USB memory stick for backup while I am traveling. I like the convenience, but if I lose the tiny thing I risk all my data.

Encryption is the obvious solution for this problem -- I use PGPdisk -- but Secustick sounds even better: It automatically erases itself after a set number of bad password attempts. The company makes a bunch of other impressive claims: The product was commissioned, and eventually approved, by the French intelligence service; it is used by many militaries and banks; its technology is revolutionary.

Unfortunately, the only impressive aspect of Secustick is its hubris, which was revealed when Tweakers.net completely broke its security. There's no data self-destruct feature. The password protection can easily be bypassed. The data isn't even encrypted. As a secure storage device, Secustick is pretty useless.

On the surface, this is just another snake-oil security story. But there's a deeper question: Why are there so many bad security products out there? It's not just that designing good security is hard -- although it is -- and it's not just that anyone can design a security product that he himself cannot break. Why do mediocre security products beat the good ones in the marketplace?

In 1970, American economist George Akerlof wrote a paper called "The Market for 'Lemons'" (abstract and article for pay here), which established asymmetrical information theory. He eventually won a Nobel Prize for his work, which looks at markets where the seller knows a lot more about the product than the buyer.

Akerlof illustrated his ideas with a used car market. A used car market includes both good cars and lousy ones (lemons). The seller knows which is which, but the buyer can't tell the difference -- at least until he's made his purchase. I'll spare you the math, but what ends up happening is that the buyer bases his purchase price on the value of a used car of average quality.

This means that the best cars don't get sold; their prices are too high. Which means that the owners of these best cars don't put their cars on the market. And then this starts spiraling. The removal of the best cars from the market reduces the average price buyers are willing to pay, so the very good cars no longer sell and disappear from the market. Then the good cars go, and so on, until only the lemons are left.

In a market where the seller has more information about the product than the buyer, bad products can drive the good ones out of the market.
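The spiral is easy to see in a toy simulation -- my sketch, not Akerlof's formal model. Buyers offer the average quality of whatever is still for sale, and every seller whose car is worth more than the offer withdraws:

```python
# Toy model of the lemons spiral: sellers value their car at its quality,
# buyers only know the average quality of cars still on the market and
# will pay exactly that, so the best remaining cars keep withdrawing.

def lemons_market(qualities, rounds=20):
    on_market = sorted(qualities)
    for _ in range(rounds):
        if not on_market:
            break
        offer = sum(on_market) / len(on_market)       # buyers pay average quality
        still = [q for q in on_market if q <= offer]  # better cars withdraw
        if still == on_market:
            break                                     # market has stabilized
        on_market = still
    return on_market

# Cars of quality 1..10 (10 = best, 1 = worst lemon).
print(lemons_market(list(range(1, 11))))  # [1] -- only the worst lemon survives
```

Each round of withdrawals drags the average offer down, which triggers the next round; with uniformly spread qualities the market unravels all the way to the bottom.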

The computer security market has a lot of the same characteristics as Akerlof's lemons market. Take the market for encrypted USB memory sticks. Several companies make encrypted USB drives -- Kingston Technology sent me one in the mail a few days ago -- but even I couldn't tell you if Kingston's offering is better than Secustick. Or if it's better than any other encrypted USB drives. They use the same encryption algorithms. They make the same security claims. And if I can't tell the difference, most consumers won't be able to either.

Of course, it's more expensive to make an actually secure USB drive. Good security design takes time, and necessarily means limiting functionality. Good security testing takes even more time, especially if the product is any good. This means the less-secure product will be cheaper, sooner to market and have more features. In this market, the more-secure USB drive is going to lose out.

I see this kind of thing happening over and over in computer security. In the late 1980s and early 1990s, there were more than a hundred competing firewall products. The few that "won" weren't the most secure firewalls; they were the ones that were easy to set up, easy to use and didn't annoy users too much. Because buyers couldn't base their buying decision on the relative security merits, they based them on these other criteria. The intrusion detection system, or IDS, market evolved the same way, and before that the antivirus market. The few products that succeeded weren't the most secure, because buyers couldn't tell the difference.

How do you solve this? You need what economists call a "signal," a way for buyers to tell the difference. Warranties are a common signal. Alternatively, an independent auto mechanic can tell good cars from lemons, and a buyer can hire that expertise. The Secustick story demonstrates this: if there is a consumer advocate group that has the expertise to evaluate different products, then the lemons can be exposed.

Secustick, for one, seems to have been withdrawn from sale.

But security testing is both expensive and slow, and it just isn't possible for an independent lab to test everything. Unfortunately, the exposure of Secustick is an exception. It was a simple product, and easily exposed once someone bothered to look. A complex software product -- a firewall, an IDS -- is very hard to test well. And, of course, by the time you have tested it, the vendor has a new version on the market.

In reality, we have to rely on a variety of mediocre signals to differentiate the good security products from the bad. Standardization is one signal. The widely used AES encryption standard has reduced, although not eliminated, the number of lousy encryption algorithms on the market. Reputation is a more common signal; we choose security products based on the reputation of the company selling them, the reputation of some security wizard associated with them, magazine reviews, recommendations from colleagues or general buzz in the media.

All these signals have their problems. Even product reviews, which should be as comprehensive as the Tweakers' Secustick review, rarely are. Many firewall comparison reviews focus on things the reviewers can easily measure, like packets per second, rather than how secure the products are. In IDS comparisons, you can find the same bogus "number of signatures" comparison. Buyers lap that stuff up; in the absence of deep understanding, they happily accept shallow data.

With so many mediocre security products on the market, and the difficulty of coming up with a strong quality signal, vendors don't have strong incentives to invest in developing good products. And the vendors that do tend to die a quiet and lonely death.

This essay originally appeared in Wired.

EDITED TO ADD (4/22): Slashdot thread.

Posted on April 19, 2007 at 7:59 AM • 51 Comments

Cameras "Predict" Crimes

New developments from surveillance-camera-happy England:

The £7,000 device, nicknamed "the Bug", consists of a ring of eight cameras scanning in all directions. It uses software to detect whether anybody is walking or loitering in a way that marks them out from the crowd. A ninth camera then zooms in to follow them if it thinks they are behaving suspiciously.


"The camera picks up on unusual movement, zooms in on someone and gathers evidence from a face and clothing, acting as a 24-hour operator without someone having to be there," said Jason Butler, head of CCTV at Luton borough council. "We have kids with Asbos telling us they hate the thing because it follows them wherever they go."

This is interesting. It moves us further along the continuum into thoughtcrimes, but as near as I can tell, the system just collects evidence on people it thinks suspicious, just in case. Assuming the data is erased immediately afterward, it's much less invasive than actually accosting someone for thoughtcrime; the cost of false alarms is minimal.

I doubt it works nearly as well as the article claims, but that's likely to change in 5 to 10 years. For example, there's a lot of research being done in the area of microfacial expressions to detect lying and other thoughts. This is the sort of technological advance that we need to be talking about in terms of security, privacy, and liberty.

Posted on April 19, 2007 at 6:20 AM • 41 Comments

Arresting Children

A disturbing trend.

These are not the sorts of matters the police should be getting involved in. The police aren't trained to handle children this age, and children this age don't benefit by being fingerprinted and thrown in jail.

EDITED TO ADD (4/18): Another example:

Unfortunately, the school forgot that the clocks had switched to Daylight Saving Time that morning. The time stamps left on the hotline were adjusted by an hour after the Daylight Saving Time change, causing Webb's call to be logged at the same time the bomb threat was placed. Webb, who's never even had a detention in his life, had actually made his call an hour before the bomb threat was placed.

Despite the fact that the recording of the call featured a voice that sounded nothing like Webb's, the police arrested Webb and he spent 12 days in a juvenile detention facility before the school eventually realised their mistake.
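The timestamp failure is easy to reproduce. A sketch with hypothetical times: if logged timestamps are "corrected" by blindly adding an hour, an event recorded before the clock change collides with a genuinely later one:

```python
# Sketch of the timekeeping mistake described above (times are invented):
# the hotline's clock was an hour behind after the spring-forward change,
# and someone later "fixed" the logs by shifting every timestamp forward.

from datetime import datetime, timedelta

call_logged   = datetime(2007, 3, 11, 9, 0)   # the student's call, as logged
threat_actual = datetime(2007, 3, 11, 10, 0)  # bomb threat, on a correct clock

# Blanket "correction": shift every logged timestamp forward an hour --
# including events that were already stamped correctly relative to each other.
call_corrected = call_logged + timedelta(hours=1)

print(call_corrected == threat_actual)  # True -- two distinct events
                                        # now share a single timestamp
```

The safe approach is to record in UTC (or at least with an explicit UTC offset) and convert for display, so a wall-clock change never rewrites the ordering of logged events.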

Posted on April 18, 2007 at 12:02 PM • 66 Comments

Foiling Bank Robbers with Kindness

Seems to work:

The method is a sharp contrast to the traditional training for bank employees confronted with a suspicious person, which advises not approaching the person, and at most, activating an alarm or dropping an exploding dye pack into the cash.

When a man walked into a First Mutual branch last year wearing garden gloves and sunglasses, manager Scott Taffera greeted him heartily, invited him to remove the glasses, and guided him to an equally friendly teller. The man eventually asked for a roll of quarters and left.

Carr said he suspects the man was the "Garden Glove Bandit," who robbed area banks between March 2004 and November 2006.

What I like about this security system is that it fails really well in the event of a false alarm. There's nothing wrong with being extra nice to a legitimate customer.

Posted on April 18, 2007 at 6:24 AM • 39 Comments

Another Boston Terrorism Overreaction

Are these people trying to be stupid?

Sofia Loginova, 17, a Quincy High senior, said she didn't hire anyone to hang four backpacks emblazoned with her Web site's name, and filled with newspapers and some dollar bills, from tree limbs and on a fence near the school. The unexplained backpacks sparked a panic.

The State Police Bomb Squad brought in a mechanical robot and a bomb-sniffing dog to investigate, immediately bringing to mind the Cartoon Network marketing ploy that shut down parts of Boston for hours earlier this year.

Terrorism used to be hard. Now all you have to do is hang backpacks from trees near schools.

Refuse to be terrorized, people!

Posted on April 17, 2007 at 7:23 AM83 Comments

DHS No Longer Gets Failing Cybersecurity Grade

They got a D.

The rest of the U.S. government didn't do very well. Eight of twenty-four departments (including the Department of Defense) failed. Overall, the federal government received a C- (up from a D+ last year).

Posted on April 16, 2007 at 6:36 AM26 Comments

U.S. Government Contractor Injects Malicious Software into Critical Military Computers

This is just a frightening story. Basically, a contractor with a top secret security clearance was able to inject malicious code and sabotage computers used to track Navy submarines.

Yeah, it was annoying to find and fix the problem, but hang on. How is it possible for a single disgruntled idiot to damage a multi-billion-dollar weapons system? Why aren't there any security systems in place to prevent this? I'll bet anything that there was absolutely no control or review over who put what code in where. I'll bet that if this guy had been just a little bit cleverer, he could have done a whole lot more damage without ever getting caught.

One of the ways to deal with the problem of trusted individuals is by making sure they're trustworthy. The clearance process is supposed to handle that. But given the enormous damage that a single person can do here, it makes a lot of sense to add a second security mechanism: limiting the degree to which each individual must be trusted. A decent system of code reviews, or change auditing, would go a long way to reduce the risk of this sort of thing.

I'll also bet you anything that Microsoft has more security around its critical code than the U.S. military does.

Posted on April 13, 2007 at 12:33 PM48 Comments

Bank Botches Two-Factor Authentication

From their press release:

The computer was protected by two layers of security, a unique user-identifier and a multiple-character, alpha-numeric password.

Um, hello? Having a username and a password -- even if they're both secret -- does not count as two factors, two layers, or two of anything. You need two different authentication systems: a password and a biometric, say, or a password and a token.
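A token is the classic second factor: something you have, generating one-time codes from a shared secret. A minimal sketch of HOTP (RFC 4226), the counter-based scheme hardware tokens of this era commonly implemented -- the server checks this code *in addition to* the password, not instead of it:

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """One-time code from a shared secret and a moving counter (RFC 4226)."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                                # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

With the RFC's published test secret `b"12345678901234567890"`, counter 0 yields `755224` -- the point being that an attacker who phishes the password still can't produce the next code without the token.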

I wouldn't trust the New Horizons Community Credit Union with my money.

Posted on April 13, 2007 at 7:33 AM68 Comments

Childhood Safety vs. Childhood Health

Another example of how we get the risks wrong:

Although statistics show that rates of child abduction and sexual abuse have marched steadily downward since the early 1990s, fear of these crimes is at an all-time high. Even the panic-inducing Megan's Law Web site says stranger abduction is rare and that 90 percent of child sexual-abuse cases are committed by someone known to the child. Yet we still suffer a crucial disconnect between perception of crime and its statistical reality. A child is almost as likely to be struck by lightning as kidnapped by a stranger, but it's not fear of lightning strikes that parents cite as the reason for keeping children indoors watching television instead of out on the sidewalk skipping rope.

And when a child is parked on the living room floor, he or she may be safe, but is safety the sole objective of parenting? The ultimate goal is independence, and independence is best fostered by handing it out a little at a time, not by withholding it in a trembling fist that remains clenched until it's time to move into the dorms.

Meanwhile, as rates of child abduction and abuse move down, rates of Type II diabetes, hypertension and other obesity-related ailments in children move up. That means not all the candy is coming from strangers. Which scenario should provoke more panic: the possibility that your child might become one of the approximately 100 children who are kidnapped by strangers each year, or one of the country's 58 million overweight adults?

Posted on April 12, 2007 at 6:05 AM47 Comments

German Police Want the Right to Hack Computers


German Interior Minister Wolfgang Schaeuble has confirmed plans to seek a change to the constitution to allow the state secret access to the computers of private individuals, in an interview published Thursday.

Supposedly Switzerland is also considering a similar law.

Posted on April 11, 2007 at 1:36 PM40 Comments

There Aren't That Many Serious Spammers Out There

Interesting analysis:

If there's only a few large gangs operating -- and other people are detecting these huge swings of activity as well -- then that's very significant for public policy. One can have sympathy for police officers and regulators faced with the prospect of dealing with hundreds or thousands of spammers; dealing with them all would take many (rather boring and frustrating) lifetimes. But if there are, say, five, big gangs at most -- well that's suddenly looking like a tractable problem.

Spam is costing us [allegedly] billions (and is a growing problem for the developing world), so there's all sorts of economic and diplomatic reasons for tackling it. So tell your local spam law enforcement officials to have a look at the graph of Demon Internet's traffic. It tells them that trying to do something about the spammers currently makes a lot of sense -- and that by just tracking down a handful of people, they will be capable of making a real difference!

Posted on April 11, 2007 at 6:41 AM35 Comments

Marx Brothers on Security

Count the security lessons: bad password management, protocol failures, poor authentication, check fraud, and -- I suppose -- an attack made possible by poor bounds checking. What else?

Posted on April 10, 2007 at 1:17 PM21 Comments

Ordinary People Being Labeled as Terrorists

By law, every business has to check their customers against a list of "specially designated nationals," and not do business with anyone on that list.

Of course, the list is riddled with bad names and many innocents get caught up in the net. And many businesses decide that it's easier to turn away potential customers whose names are on the list, creating -- well -- a shunned class:

Tom Kubbany is neither a terrorist nor a drug trafficker, has average credit and has owned homes in the past, so the Northern California mental-health worker was baffled when his mortgage broker said lenders were not interested in him. Reviewing his loan file, he discovered something shocking. At the top of his credit report was an OFAC alert provided by credit bureau TransUnion that showed that his middle name, Hassan, is an alias for Ali Saddam Hussein, purportedly a "son of Saddam Hussein."

The record is not clear on whether Ali Saddam Hussein was a Hussein offspring, but the OFAC list stated he was born in 1980 or 1983. Kubbany was born in Detroit in 1949.

Under OFAC guidance, the date discrepancy signals a false match. Still, Kubbany said, the broker decided not to proceed. "She just talked with a bunch of lenders over the phone and they said, 'No,' " he said. "So we said, 'The heck with it. We'll just go somewhere else.' "

Kubbany and his wife are applying for another loan, though he worries that the stigma lingers. "There's a dark cloud over us," he said. "We will never know. If we had qualified for the mortgage last summer, then we might have been in a house now."

Saad Ali Muhammad is an African American who was born in Chicago and converted to Islam in 1980. When he tried to buy a used car from a Chevrolet dealership three years ago, a salesman ran his credit report and at the top saw a reference to "OFAC search," followed by the names of terrorists including Osama bin Laden. The only apparent connection was the name Muhammad. The credit report, also by TransUnion, did not explain what OFAC was or what the credit report user should do with the information. Muhammad wrote to TransUnion and filed a complaint with a state human rights agency, but the alert remains on his report, Sinnar said.

Colleen Tunney-Ryan, a TransUnion spokeswoman, said in an e-mail that clients using the firm's credit reports are solely responsible for any action required by federal law as a result of a potential match and that they must agree they will not take any adverse action against a consumer based solely on the report.

The lawyers' committee documented other cases, including that of a couple in Phoenix who were about to close on their first home, only to be told the sale could not proceed because the husband's first and last names -- common Hispanic names -- matched an entry on the OFAC list. The entry did not include a date or place of birth, which could have helped distinguish the individuals.

In another case, a Roseville, Calif., couple wanted to buy a treadmill from a home fitness store on a financing plan. A bank representative told the salesperson that because the husband's first name was Hussein, the couple would have to wait 72 hours while they were investigated. Though the couple eventually received the treadmill, they were so embarrassed by the incident they did not want their names in the report, Sinnar said.

This is the same problem as the no-fly list, only in a larger context. And it's no way to combat terrorism. Thankfully, many businesses don't know to check this list and people whose names are similar to suspected terrorists' can still lead mostly normal lives. But the trend here is not good.
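These failures come down to matching on the name alone and ignoring the disambiguating fields the list already carries. A sketch of what the OFAC guidance quoted above actually asks for -- a date discrepancy should clear the match (the entry below is hypothetical, modeled on the Kubbany story):

```python
def ofac_match(name: str, birth_year, entry: dict) -> bool:
    """Screen a customer against one watch-list entry.

    Name alone is a hopelessly weak key; per the guidance quoted
    above, a date-of-birth discrepancy signals a false match.
    """
    if name.strip().lower() != entry["name"].strip().lower():
        return False
    known_years = entry.get("birth_years")
    if birth_year is not None and known_years:
        return birth_year in known_years
    return True  # no disambiguating data: escalate for human review

# Hypothetical entry modeled on the story:
entry = {"name": "Ali Saddam Hussein", "birth_years": [1980, 1983]}
```

Under this rule, a customer born in 1949 clears immediately. The businesses in the article skipped even this trivial check, which is how the shunned class gets created.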

Posted on April 10, 2007 at 6:23 AM69 Comments

Cameras in the UK

The UK police are considering mandating the quality of commercial CCTV cameras to ensure that the images meet their evidence standards.

More on cameras. Also this.

Posted on April 9, 2007 at 12:26 PM24 Comments

Dept of Homeland Security Wants DNSSEC Keys

This is a big deal:

The shortcomings of the present DNS have been known for years but difficulties in devising a system that offers backward compatibility while scaling to millions of nodes on the net have slowed down the implementation of its successor, Domain Name System Security Extensions (DNSSEC). DNSSEC ensures that domain name requests are digitally signed and authenticated, a defence against forged DNS data, a product of attacks such as DNS cache poisoning used to trick surfers into visiting bogus websites that pose as the real thing.

Obtaining the master key for the DNS root zone would give US authorities the ability to track DNS Security Extensions (DNSSec) "all the way back to the servers that represent the name system's root zone on the internet".

Access to the "key-signing key" would give US authorities a supervisory role over DNS lookups, vital for functions ranging from email delivery to surfing the net. At a recent ICANN meeting in Lisbon, Bernard Turcotte, president of the Canadian Internet Registration Authority, said managers of country registries were concerned about the proposal to allow the US to control the master keys, giving it privileged control of internet resources, Heise reports.
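Why the root key matters: DNSSEC builds a chain of trust in which each parent zone publishes a DS record -- a digest committing to its child zone's key -- and every validation chain terminates at the root key. Whoever holds the root key-signing key can re-sign any link. A toy sketch of the chain check (real DS records digest the owner name plus the full DNSKEY RDATA per RFC 4034; this just digests an opaque key blob to show the shape):

```python
import hashlib

def ds_digest(dnskey: bytes) -> str:
    # Toy stand-in for a DS record: a digest of the child zone's key.
    return hashlib.sha256(dnskey).hexdigest()

def chain_validates(links) -> bool:
    """Each link pairs the DS the parent publishes with the child's key."""
    return all(ds == ds_digest(key) for ds, key in links)

# Toy chain: root vouches for ".com", ".com" vouches for "example.com".
com_key, example_key = b"com-zone-key", b"example-zone-key"
chain = [(ds_digest(com_key), com_key),
         (ds_digest(example_key), example_key)]
```

A resolver trusts the bottom of the chain only because every digest along the way checks out against a key it already trusts -- which is exactly the supervisory position control of the root key confers.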

Another news report.

Posted on April 9, 2007 at 9:45 AM60 Comments

Friday Squid Blogging: Giant Squids off California!

Seems like there's some hysteria in the making:

They are deadly, huge and fast moving. Their tentacles can suck the life out of a human being and they've arrived in Northern California.

The whole article is like that.

Posted on April 6, 2007 at 3:00 PM32 Comments

Consequences of a Nuclear Explosion in an American City

This paper, from February's International Journal of Health Geographics, (abstract here), analyzes the consequences of a nuclear attack on several American cities and points out that burn unit capacity nationwide is far too small to accommodate the victims. It says just training people to flee crosswind could greatly reduce deaths from fallout.


The effects of 20 kiloton and 550 kiloton nuclear detonations on high priority target cities are presented for New York City, Chicago, Washington D.C. and Atlanta. Thermal, blast and radiation effects are described, and affected populations are calculated using 2000 block level census data. Weapons of 100 Kts and up are primarily incendiary or radiation weapons, able to cause burns and start fires at distances greater than they can significantly damage buildings, and to poison populations through radiation injuries well downwind in the case of surface detonations. With weapons below 100 Kts, blast effects tend to be stronger than primary thermal effects from surface bursts. From the point of view of medical casualty treatment and administrative response, there is an ominous pattern where these fatalities and casualties geographically fall in relation to the location of hospital and administrative facilities. It is demonstrated that a staggering number of the main hospitals, trauma centers, and other medical assets are likely to be in the fatality plume, rendering them essentially inoperable in a crisis.


Among the consequences of this outcome would be the probable loss of command-and-control, mass casualties that will have to be treated in an unorganized response by hospitals on the periphery, as well as other expected chaotic outcomes from inadequate administration in a crisis. Vigorous, creative, and accelerated training and coordination among the federal agencies tasked for WMD response, military resources, academic institutions, and local responders will be critical for large-scale WMD events involving mass casualties.

I've long said that emergency response is something we should be spending money on. This kind of analysis is both interesting and helpful.

A commentary.

Posted on April 6, 2007 at 10:24 AM25 Comments


Last month Marine General James Cartwright told the House Armed Services Committee that the best cyber defense is a good offense.

As reported in Federal Computer Week, Cartwright said: "History teaches us that a purely defensive posture poses significant risks," and that if "we apply the principle of warfare to the cyberdomain, as we do to sea, air and land, we realize the defense of the nation is better served by capabilities enabling us to take the fight to our adversaries, when necessary, to deter actions detrimental to our interests."

The general isn't alone. In 2003, the entertainment industry tried to get a law passed giving them the right to attack any computer suspected of distributing copyrighted material. And there probably isn't a sys-admin in the world who doesn't want to strike back at computers that are blindly and repeatedly attacking their networks.

Of course, the general is correct. But his reasoning illustrates perfectly why peacetime and wartime are different, and why generals don't make good police chiefs.

A cyber-security policy that condones both active deterrence and retaliation -- without any judicial determination of wrongdoing -- is attractive, but it's wrongheaded, not least because it ignores the line between war, where those involved are permitted to determine when counterattack is required, and crime, where only impartial third parties (judges and juries) can impose punishment.

In warfare, the notion of counterattack is extremely powerful. Going after the enemy -- its positions, its supply lines, its factories, its infrastructure -- is an age-old military tactic. But in peacetime, we call it revenge, and consider it dangerous. Anyone accused of a crime deserves a fair trial. The accused has the right to defend himself, to face his accuser, to an attorney, and to be presumed innocent until proven guilty.

Both vigilante counterattacks and pre-emptive attacks fly in the face of these rights. They punish people who haven't been found guilty. It's the same whether it's an angry lynch mob stringing up a suspect, the MPAA disabling the computer of someone it believes made an illegal copy of a movie, or a corporate security officer launching a denial-of-service attack against someone he believes is targeting his company over the net.

In all of these cases, the attacker could be wrong. This has been true for lynch mobs, and on the internet it's even harder to know who's attacking you. Just because my computer looks like the source of an attack doesn't mean that it is. And even if it is, it might be a zombie controlled by yet another computer; I might be a victim, too. The goal of a government's legal system is justice; the goal of a vigilante is expediency.

I understand the frustrations of General Cartwright, just as I do the frustrations of the entertainment industry, and the world's sys-admins. Justice in cyberspace can be difficult. It can be hard to figure out who is attacking you, and it can take a long time to make them stop. It can be even harder to prove anything in court. The international nature of many attacks exacerbates the problems; more and more cybercriminals are jurisdiction shopping: attacking from countries with ineffective computer crime laws, easily bribable police forces and no extradition treaties.

Revenge is appealingly straightforward, and treating the whole thing as a military problem is easier than working within the legal system.

But that doesn't make it right. In 1789, the Declaration of the Rights of Man and of the Citizen declared: "No person shall be accused, arrested, or imprisoned except in the cases and according to the forms prescribed by law. Any one soliciting, transmitting, executing, or causing to be executed any arbitrary order shall be punished."

I'm glad General Cartwright thinks about offensive cyberwar; it's how generals are supposed to think. I even agree with Richard Clarke's threat of military-style reaction in the event of a cyber-attack by a foreign country or a terrorist organization. But short of an act of war, we're far safer with a legal system that respects our rights.

This essay originally appeared in Wired.

Posted on April 5, 2007 at 7:35 AM50 Comments

Breaking WEP in Under a Minute

WEP (Wired Equivalent Privacy) was the protocol used to secure wireless networks. It's known to be insecure and has been replaced by Wi-Fi Protected Access, but it's still in use.

This paper, "Breaking 104 bit WEP in less than 60 seconds," is the best attack against WEP to date:


We demonstrate an active attack on the WEP protocol that is able to recover a 104-bit WEP key using less than 40,000 frames with a success probability of 50%. In order to succeed in 95% of all cases, 85,000 packets are needed. The IV of these packets can be randomly chosen. This is an improvement in the number of required frames by more than an order of magnitude over the best known key-recovery attacks for WEP. On an IEEE 802.11g network, the number of frames required can be obtained by re-injection in less than a minute. The required computational effort is approximately 2^20 RC4 key setups, which on current desktop and laptop CPUs is negligible.
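For context on what "2^20 RC4 key setups" measures: WEP builds each packet's RC4 key by prepending a 24-bit IV to the shared key, and the attack exploits the resulting related keys. Below is a sketch of textbook RC4 itself -- the key-scheduling setup being counted, plus keystream generation -- not the attack:

```python
def rc4_ksa(key: bytes):
    """RC4 key scheduling: the per-key setup step the attack cost counts."""
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    return S

def rc4_keystream(key: bytes, n: int) -> bytes:
    """Generate n keystream bytes; WEP XORs these over each frame."""
    S = rc4_ksa(key)
    i = j = 0
    out = bytearray()
    for _ in range(n):
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(S[(S[i] + S[j]) % 256])
    return bytes(out)
```

The KSA is a few hundred byte operations, so a million of them really is negligible on commodity hardware -- which is why the attack runs in under a minute.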

Posted on April 4, 2007 at 12:46 PM28 Comments

Story of a Credit Card Fraudster

A two-part story from The Guardian: an excerpt from Other People's Money: The Rise And Fall Of Britain's Most Audacious Credit Card Fraudster.

The first time I did the WTS, it was on a man from London who was staying in a £400 hotel room in Glasgow. I used my hotel phone trick to get his card and personal information -- fortunately, he was a trusting individual. I then called his card company and explained that I was the gentleman concerned, in Glasgow on business, and had suffered the theft of my wallet and passport. I was understandably distraught, lying on my bed in Battlefield and speaking quietly so my parents couldn't hear, and wondered what the company suggested I do. The sympathetic woman at the other end proposed I take a cash advance set against my account, which they could have ready for collection within a couple of hours at a wire transfer operator.

Posted on April 4, 2007 at 6:25 AM16 Comments

VBootkit Bypasses Vista's Code Signing Mechanisms

Interesting work:

Experts say that the fundamental problem that this highlights is that every stage in Vista's booting process works on blind faith that everything prior to it ran cleanly. The boot kit is therefore able to copy itself into the memory image even before Vista has booted and capture interrupt 13, which operating systems use for read access to sectors of hard drives, among other things.

This is not theoretical; VBootkit is actual code that demonstrates this.
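The fix for "blind faith" booting is for each stage to measure the next one before running it. A toy sketch of the idea -- every stage's hash must match a trusted manifest, so injected code fails the check (real measured-boot systems anchor the manifest in hardware, such as a TPM; this is just the shape of the check):

```python
import hashlib

def stage_ok(code: bytes, expected_sha256: str) -> bool:
    # Measure a boot stage before handing it control.
    return hashlib.sha256(code).hexdigest() == expected_sha256

def boot(stages, manifest) -> bool:
    """Refuse to run any stage whose measurement doesn't match the manifest."""
    return all(stage_ok(code, digest)
               for code, digest in zip(stages, manifest))

# Hypothetical two-stage chain with a precomputed manifest:
bootloader, kernel = b"loader v1", b"kernel v1"
manifest = [hashlib.sha256(s).hexdigest() for s in (bootloader, kernel)]
```

A boot kit that copies itself into an earlier stage changes that stage's hash, so a chain built this way halts instead of executing it -- exactly the check Vista's boot path lacked.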

Posted on April 3, 2007 at 12:51 PM51 Comments

"Papers, Please"

Great essay from 1990 by Bill Holm:

No papers, no pay. It's an interesting equation, and I think it has not surfaced before in Minnesota. Neither of my Icelandic grandfathers, for instance, had papers enough to work in Marshall, and if you're an old Minnesotan, it's unlikely that your grandfathers did, either. Viking wetbacks, they were.

Though Section 1324A, Title 8, of the U. S. Immigration Code was passed by Congress during my nonnewspaper-reading absence in central China, it doesn't take much thinking to figure out its rationale: it is intended, to use the vulgar cliché, to "stem the flood" of illegal Mexican labor. It also doesn't take much intelligence to figure out that if you're a Mexican laborer in southern California and know you have to sign this silly form, you will promptly dummy up an "original" Social Security card and a driver's license or birth certificate. Meanwhile, imagine Enrique Lopez, whose family has been in California since before Plymouth Rock, being abused by an officious bureaucrat because, like the rest of us, his "original" Social Security card disappeared down his Maytag twenty-five years ago. Visualize this. And then visualize the Senate debate on this legislation. As Mark Twain said, the true native American criminal class must certainly be Congress, and its behavior in this case is a nice mixture of hypocrisy, cowardice and thoughtlessness.

A friend, after hearing me in high dudgeon and confessing that he had himself signed such a form with silent misgivings, suggested that I might be more sensitive to such issues because of my recent return from China. If this is true, it is a harsh and sad comment both about me and about American citizens generally. If we have to spend a year in an authoritarian country producing papers on demand before we become sensitized to the moral and political dangers of Section 1324A, then we are already a nation of slaves, passive and agreeable, ready for Orwell's eternal "boot in the human face."

I'm curious what he's thinking today.

Posted on April 3, 2007 at 7:40 AM31 Comments

JavaScript Hijacking

Interesting paper on JavaScript Hijacking: a new type of eavesdropping attack against Ajax-style Web applications. I'm pretty sure it's the first type of attack that specifically targets Ajax code. The attack is possible because Web browsers don't protect JavaScript the same way they protect HTML; if a Web application transfers confidential data using messages written in JavaScript, in some cases the messages can be read by an attacker.

The authors show that many popular Ajax programming frameworks do nothing to prevent JavaScript hijacking. Some actually require a programmer to create a vulnerable server in order to function.

Like so many of these sorts of vulnerabilities, preventing the class of attacks is easy. In many cases, it requires just a few additional lines of code. And like so many software security problems, programmers need to understand the security implications of their work so that they can mitigate the risks they face. But my guess is that JavaScript hijacking won't be solved so easily, because programmers don't understand the security implications of their work and won't prevent the attacks.
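One of those "few additional lines" of defense: the attack works by loading your JSON endpoint through a cross-site `<script>` tag, so prefixing every response with something unexecutable denies the attacker's page any data. A sketch of the server and client halves in Python (the guard string is one common convention; requiring POST or a custom request header are alternatives):

```python
import json

GUARD = "while(1);"  # an infinite loop: a <script> include hangs instead of leaking

def protect(data) -> str:
    """Serve JSON behind a prefix that cannot execute as JavaScript."""
    return GUARD + json.dumps(data)

def parse_protected(body: str):
    """The legitimate client strips the guard before parsing."""
    if not body.startswith(GUARD):
        raise ValueError("missing anti-hijacking guard")
    return json.loads(body[len(GUARD):])
```

A legitimate client fetching via XMLHttpRequest can strip the prefix; a hostile page including the URL as a script cannot, because it never sees the raw bytes -- it can only execute them.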

Posted on April 2, 2007 at 3:45 PM64 Comments

TSA Failures in the News

I'm not sure which is more important -- the news or the fact that no one is surprised:

Sources told 9NEWS the Red Team was able to sneak about 90 percent of simulated weapons past checkpoint screeners in Denver. In the baggage area, screeners caught one explosive device that was packed in a suitcase. However later, screeners in the baggage area missed a book bomb, according to sources.

"There's very little substance to security," said former Red Team leader Bogdan Dzakovic. "It literally is all window dressing that we're doing. It's big theater on TV and when you go to the airport. It's just security theater."

Dzakovic was a Red Team leader from 1995 until September 11, 2001. After the terrorist attacks, Dzakovic became a federally protected whistleblower and alleged that thousands of people died needlessly. He testified before the 9/11 Commission and the National Commission on Terrorist Attacks Upon the US that the Red Team "breached security with ridiculous ease up to 90 percent of the time," and said the FAA "knew how vulnerable aviation security was."

Dzakovic, who is currently a TSA inspector, said security is no better today.

"It's worse now. The terrorists can pretty much do what they want when they want to do it," he said.

Posted on April 2, 2007 at 12:16 PM47 Comments

2006 Operating System Vulnerability Study

Long, but interesting.


While there are an enormous variety of operating systems to choose from, only four "core" lineages exist in the mainstream -- Windows, OS X, Linux and UNIX. Each system carries its own baggage of vulnerabilities ranging from local exploits and user introduced weaknesses to remotely available attack vectors.

As far as "straight-out-of-box" conditions go, both Microsoft's Windows and Apple's OS X are ripe with remotely accessible vulnerabilities. Even before enabling the servers, Windows based machines contain numerous exploitable holes allowing attackers to not only access the system but also execute arbitrary code. Both OS X and Windows were susceptible to additional vulnerabilities after enabling the built-in services. Once patched, however, both companies support a product that is secure, at least from the outside. The UNIX and Linux variants present a much more robust exterior to the outside. Even when the pre-configured server binaries are enabled, each system generally maintained its integrity against remote attacks. Compared with the Microsoft and Apple products, however, UNIX and Linux systems tend to have a higher learning curve for acceptance as desktop platforms.

When it comes to business, most systems have the benefit of trained administrators and IT departments to properly patch and configure the operating systems and their corresponding services. Things are different with home computers. The esoteric nature of the UNIX and Linux systems tend to result in home users with an increased understanding of security concerns. An already "hardened" operating system therefore has the benefit of a knowledgeable user base. The more consumer oriented operating systems made by Microsoft and Apple are each hardened in their own right. As soon as users begin to arbitrarily enable remote services or fiddle with the default configurations, the systems quickly become open to intrusion. Without a diligence for applying the appropriate patches or enabling automatic updates, owners of Windows and OS X systems are the most susceptible to quick and thorough remote violations by hackers.

Posted on April 2, 2007 at 7:38 AM32 Comments

Security-Related April Fool's Jokes

My favorite so far: "Window Transparency Information Disclosure."

An information disclosure attack can be launched against buildings that make use of windows made of glass or other transparent materials by observing externally-facing information through the window.

There's also "Technology retrieves sounds in the wall":

Every wall in a room is made up of millions and millions of atoms. Each atom is a collection of electrons, protons and neutrons - all electrically charged and constantly moving.

When anyone inside the four walls of a room speaks, the sound carries energy that travels in waves and hits the walls. When this voice energy hits the atoms in a wall the electrons and protons are disturbed.

Each word spoken hits the atoms with a different energy level and disturbs the atoms differently.

Scientists have worked on the software and technology that can measure how each atom has been disturbed and match each unique disturbance with a unique word.

The technology virtually "replays" the sequence of words that have been spoken inside the walls. It's like rewinding a tape recorder and you can go as far back in history as you want.

If you find any others, please post them in the comments. This is the canonical list of April Fool's jokes on the web.

EDITED TO ADD (4/1): "Threat Alert" Jesus.

EDITED TO ADD (4/2): And this by Jim Harper.

Posted on April 1, 2007 at 11:23 AM27 Comments

Announcing: Second Annual Movie-Plot Threat Contest

The first Movie-Plot Threat Contest asked you to invent a horrific and completely ridiculous, but plausible, terrorist plot. All the entrants were worth reading, but Tom Grant won with his idea to crash an explosive-filled plane into the Grand Coulee Dam.

This year the contest is a little different. We all know that a good plot to blow up an airplane will cause the banning, or at least screening, of something innocuous. If you stop and think about it, it's a stupid response. We screened for guns and bombs, so the terrorists used box cutters. We took away box cutters and small knives, so they hid explosives in their shoes. We started screening shoes, so they planned to use liquids. We now confiscate liquids (even though experts agree the plot was implausible)...and they're going to do something else. We can't win this game, so why are we playing?

Well, we are playing. And now you can, too. Your goal: invent a terrorist plot to hijack or blow up an airplane with a commonly carried item as a key component. The component should be so critical to the plot that the TSA will have no choice but to ban the item once the plot is uncovered. I want to see a plot horrific and ridiculous, but just plausible enough to take seriously.

Make the TSA ban wristwatches. Or laptop computers. Or polyester. Or zippers over three inches long. You get the idea.

Your entry will be judged on the common item that the TSA has no choice but to ban, as well as the cleverness of the plot. It has to be realistic; no science fiction, please. And the write-up is critical; last year the best entries were the most entertaining to read.

As before, assume an attacker profile on the order of 9/11: 20 to 30 unskilled people, and about $500,000 with which to buy skills, equipment, etc.

Post your movie plots here on this blog.

Judging will be by me, swayed by popular acclaim in the blog comments section. The prize will be an autographed copy of Beyond Fear (in both English and Japanese) and the adulation of your peers. And, if I can swing it -- I couldn't last year -- a phone call with a real live movie producer.

Entries close at the end of the month -- April 30 -- so Crypto-Gram readers can also play.

This is not an April Fool's joke, although it's in the spirit of the season. The purpose of this contest is absurd humor, but I hope it also makes a point. Terrorism is a real threat, but we're not any safer through security measures that require us to correctly guess what the terrorists are going to do next.

EDITED TO ADD (6/15): Winner here.

Posted on April 1, 2007 at 6:46 AM364 Comments

Photo of Bruce Schneier by Per Ervland.

Schneier on Security is a personal website. Opinions expressed are not necessarily those of IBM Resilient.