Blog: April 2006 Archives

Microsoft and Internet Explorer

John Dvorak makes an interesting argument that Internet Explorer was Microsoft’s greatest mistake ever. Certainly its decision to tightly integrate IE with the operating system—done as an anti-competitive maneuver against Netscape during the Browser Wars—has resulted in some enormous security problems that Microsoft has still not recovered from. Not even with the introduction of IE7.

Posted on April 28, 2006 at 12:29 PM • 65 Comments

NSA Warrantless Wiretapping and Total Information Awareness

Technology Review has an interesting article discussing some of the technologies used by the NSA in its warrantless wiretapping program, some of them from the killed Total Information Awareness (TIA) program.

Washington’s lawmakers ostensibly killed the TIA project in Section 8131 of the Department of Defense Appropriations Act for fiscal 2004. But legislators wrote a classified annex to that document which preserved funding for TIA’s component technologies, if they were transferred to other government agencies, say sources who have seen the document, according to reports first published in The National Journal. Congress did stipulate that those technologies should only be used for military or foreign intelligence purposes against non-U.S. citizens. Still, while those component projects’ names were changed, their funding remained intact, sometimes under the same contracts.

Thus, two principal components of the overall TIA project have migrated to the Advanced Research and Development Activity (ARDA), which is housed somewhere among the 60-odd buildings of “Crypto City,” as NSA headquarters in Fort Meade, MD, is nicknamed. One of the TIA components that ARDA acquired, the Information Awareness Prototype System, was the core architecture that would have integrated all the information extraction, analysis, and dissemination tools developed under TIA. According to The National Journal, it was renamed “Basketball.” The other, Genoa II, used information technologies to help analysts and decision makers anticipate and pre-empt terrorist attacks. It was renamed “Topsail.”

Posted on April 28, 2006 at 8:01 AM • 17 Comments

Da Vinci Code Ruling Code

There is a code embedded in the ruling in The Da Vinci Code plagiarism case.

You can find it by searching for the characters in italic and boldface scattered throughout the ruling. The first characters spell out “SMITHCODE”: that’s the name of the judge who wrote the ruling. The rest remains unsolved.

According to The Times, the remaining letters are: J, a, e, i, e, x, t, o, s, t, p, s, a, c, g, r, e, a, m, q, w, f, k, a, d, p, m, q, z.

According to The Register, the remaining letters are: j a e i e x t o s t g p s a c g r e a m q w f k a d p m q z v.

According to one of my readers, who says he “may have missed some letters,” it’s: SMITHYCODEJAEIEXTOSTGPSACGREAMQWFKADPMQZV.

I think a bunch of us need to check for ourselves, and then compare notes.

And then we have to start working on solving the thing.

From the BBC:

Although he would not be drawn on his code and its meaning, Mr Justice Smith said he would probably confirm it if someone cracked it, which was “not a difficult thing to do”.

As an aside, I am mentioned in The Da Vinci Code. No, really. Page 199 of the American hardcover edition. “Da Vinci had been a cryptography pioneer, Sophie knew, although he was seldom given credit. Sophie’s university instructors, while presenting computer encryption methods for securing data, praised modern cryptologists like Zimmermann and Schneier but failed to mention that it was Leonardo who had invented one of the first rudimentary forms of public key encryption centuries ago.”

That’s right. I am a realistic background detail.

EDITED TO ADD (4/28): The code is broken. Details are in The New York Times:

Among Justice Smith’s hints, he told decoders to look at page 255 in the British paperback edition of “The Da Vinci Code,” where the protagonists discuss the Fibonacci Sequence, a famous numerical series in which each number is the sum of the two preceding ones. Omitting the zero, as Dan Brown, “The Da Vinci Code” author, does, the series begins 1, 1, 2, 3, 5, 8, 13, 21.

Solving the judge’s code requires repeatedly applying the Fibonacci Sequence, through the number 21, to the apparently random coded letters that appear in boldfaced italics in the text of his ruling: JAEIEXTOSTGPSACGREAMQWFKADPMQZVZ.

For example, the fourth letter of the coded message is I. The fourth number of the Fibonacci Sequence, as used in “The Da Vinci Code,” is 3. Therefore, decoding the I requires an alphabet that starts at the third letter of the regular alphabet, C. I is the ninth letter regularly; the ninth letter of the alphabet starting with C is K; thus, the I in the coded message stands for the letter K.

The judge inserted two twists to confound codebreakers. One is a typographical error: a letter that should have been an H in both the coded message and its translation is instead a T. The other is drawn from “Holy Blood, Holy Grail,” the other book in the copyright case. It concerns the number 2 in the Fibonacci series, which becomes a requirement to count two letters back in the regular alphabet rather than a signal to use an alphabet that begins with B. For instance, the first E in the coded message, which corresponds to a 2 in the Fibonacci series, becomes a C in the answer.

The message reads: “Jackie Fisher who are you Dreadnought.”
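Here is a minimal Python sketch of that decoding, as I read the Times description; the ciphertext (with its trailing Z) and the repeating 1, 1, 2, 3, 5, 8, 13, 21 key are from the article, and the judge’s deliberate T-for-H typo is left in place, so the output reads FISTER where FISHER belongs.

# Sketch of the "Smithy code" decoding as described in the Times article.
# Rule: for the i-th letter, take the i-th number f from the repeating
# Fibonacci key and shift forward by f-1 (an alphabet starting at the f-th
# letter), except that f=2 means shift two letters back (the "Holy Blood,
# Holy Grail" twist).

CIPHERTEXT = "JAEIEXTOSTGPSACGREAMQWFKADPMQZVZ"
FIB_KEY = [1, 1, 2, 3, 5, 8, 13, 21]  # Dan Brown's zero-less Fibonacci sequence

def decode(ciphertext):
    plain = []
    for i, c in enumerate(ciphertext):
        f = FIB_KEY[i % len(FIB_KEY)]
        shift = -2 if f == 2 else f - 1
        plain.append(chr((ord(c) - ord("A") + shift) % 26 + ord("A")))
    return "".join(plain)

print(decode(CIPHERTEXT))  # JACKIEFISTERWHOAREYOUDREADNOUGHT, note the deliberate T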

I’m disappointed, actually. That was a whopper of a hint, and I would have preferred the judge to keep quiet.

EDITED TO ADD (5/8): Commentary on my name being in The Da Vinci Code.

Posted on April 27, 2006 at 6:47 PM • 48 Comments

New Directions in Malware

Kaspersky Labs reports on extortion scams using malware:

We’ve reported more than once on cases where remote malicious users have moved away from the stealth use of infected computers (stealing data from them, using them as part of zombie networks etc) to direct blackmail, demanding payment from victims. At the moment, this method is used in two main ways: encrypting user data and corrupting system information.

Users quickly understand that something has happened to their data. They are then told that they should send a specific sum to an e-payment account maintained by the remote malicious user, whether it be EGold, Webmoney or whatever. The ransom demanded varies significantly depending on the amount of money available to the victim. We know of cases where the malicious users have demanded $50, and of cases where they have demanded more than $2,000. The first such blackmail case was in 1989, and now this method is again gaining in popularity.

In 2005, the most striking examples of this type of cybercrime were carried out using the Trojans GpCode and Krotten. The first of these encrypts user data; the second restricts itself to making a number of modifications to the victim machine’s system registry, causing it to cease functioning.

Among other worms, the article discusses the GpCode.ac worm, which encrypts data using 56-bit RSA (no, that’s not a typo). The whole article is interesting reading.
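To put 56-bit RSA in perspective, here is a rough Python sketch (not GpCode’s actual code; generating the key with sympy.randprime is my own stand-in) showing that Pollard’s rho factors a modulus of that size in a fraction of a second:

# Factoring a ~56-bit "RSA" modulus with Pollard's rho.
import math
import random
import sympy

def pollard_rho(n):
    """Return a nontrivial factor of the composite number n."""
    while True:
        x = random.randrange(2, n)
        y, c, d = x, random.randrange(1, n), 1
        while d == 1:
            x = (x * x + c) % n              # tortoise: one step
            y = (y * y + c) % n              # hare: two steps
            y = (y * y + c) % n
            d = math.gcd(abs(x - y), n)
        if d != n:
            return d                         # otherwise retry with new parameters

p = sympy.randprime(2**27, 2**28)            # two ~28-bit primes
q = sympy.randprime(2**27, 2**28)
n = p * q                                    # a ~56-bit modulus
factor = pollard_rho(n)
print(n, "=", factor, "*", n // factor)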

Posted on April 26, 2006 at 1:07 PM • 58 Comments

The Security Risk of Special Cases

In Beyond Fear, I wrote about the inherent security risks of exceptions to a security policy. Here’s an example, from airport security in Ireland.

Police officers are permitted to bypass airport security at the Dublin Airport. They flash their ID, and walk around the checkpoints.

A female member of the airport search unit is undergoing re-training after the incident in which a Department of Transport inspector passed unchecked through security screening.

It is understood that the department official was waved through security checks having flashed an official badge. The inspector immediately notified airport authorities of a failure in vetting procedures. Only gardai are permitted to pass unchecked through security.

There are two ways this failure could have happened. One, the security person could have thought that Department of Transport officials have the same privileges as police officers. And two, the security person could have thought she was being shown a police ID.

This could have just as easily been a bad guy showing a fake police ID. My guess is that the security people don’t check them all that carefully.

The meta-point is that exceptions to security are themselves security vulnerabilities. As soon as you create a system by which some people can bypass airport security checkpoints, you invite the bad guys to try and use that system. There are reasons why you might want to create those alternate paths through security, of course, but the trade-offs should be well thought out.

Posted on April 26, 2006 at 6:05 AM • 30 Comments

Digital Cameras Have Unique Fingerprints

Interesting research:

Fridrich’s technique is rooted in the discovery by her research group of this simple fact: Every original digital picture is overlaid by a weak noise-like pattern of pixel-to-pixel non-uniformity.

Although these patterns are invisible to the human eye, the unique reference pattern or “fingerprint” of any camera can be electronically extracted by analyzing a number of images taken by a single camera.

That means that as long as examiners have either the camera that took the image or multiple images they know were taken by the same camera, an algorithm developed by Fridrich and her co-inventors to extract and define the camera’s unique pattern of pixel-to-pixel non-uniformity can be used to provide important information about the origins and authenticity of a single image.

The limitation of the technique is that it requires either the camera or multiple images taken by the same camera, and isn’t informative if only a single image is available for analysis.

Like actual fingerprints, the digital “noise” in original images is stochastic in nature—that is, it contains random variables—which are inevitably created during the manufacturing process of the camera and its sensors. This virtually ensures that the noise imposed on the digital images from any particular camera will be consistent from one image to the next, even while it is distinctly different.

In preliminary tests, Fridrich’s lab analyzed 2,700 pictures taken by nine digital cameras and with 100 percent accuracy linked individual images with the camera that took them.

There’s one important aspect of this fingerprint that the article did not talk about: how easy is it to forge? Can someone analyze 100 images from a given camera, and then doctor a pre-existing picture so that it appeared to come from that camera?

My guess is that it can be done relatively easily.
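For intuition, here is a rough numpy sketch of both halves of that question. The Gaussian-filter denoiser and the plain correlation test are stand-ins of my own, not Fridrich’s actual method, which is considerably more sophisticated; they just show the shape of extracting a fingerprint from many images and of naively stamping it onto an unrelated one.

# Toy sensor-fingerprint extraction, matching, and (naive) forgery.
import numpy as np
from scipy.ndimage import gaussian_filter

def residual(img):
    """High-frequency noise residual: the image minus a denoised copy of itself."""
    return img - gaussian_filter(img, sigma=1.0)

def fingerprint(images):
    """Average the residuals of many images from the same camera."""
    return np.mean([residual(img) for img in images], axis=0)

def correlation(img, fp):
    """Normalized correlation between an image's residual and a fingerprint."""
    r, f = residual(img).ravel(), fp.ravel()
    r, f = r - r.mean(), f - f.mean()
    return float(np.dot(r, f) / (np.linalg.norm(r) * np.linalg.norm(f) + 1e-12))

def forge(img, fp, strength=1.0):
    """Naive forgery: add someone else's fingerprint to an unrelated image."""
    return img + strength * fp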

Posted on April 25, 2006 at 2:09 PM • 68 Comments

RFID Cards and Man-in-the-Middle Attacks

Recent articles about a proposed US-Canada and US-Mexico travel document (kind of like a passport, but less useful), with an embedded RFID chip that can be read up to 25 feet away, have once again made RFID security newsworthy.

My views have not changed. The most secure solution is a smart card that only works in contact with a reader; RFID is much more risky. But if we’re stuck with RFID, the combination of shielding for the chip, basic access control security measures, and some positive action by the user to get the chip to operate is a good one. The devil is in the details, of course, but those are good starting points.

And when you start proposing chips with a 25-foot read range, you need to worry about man-in-the-middle attacks. An attacker could potentially impersonate the card of a nearby person to an official reader, just by relaying messages to and from that nearby person’s card.

Here’s how the attack would work. In this scenario, customs Agent Alice has the official card reader. Bob is the innocent traveler, in line at some border crossing. Mallory is the malicious attacker, ahead of Bob in line at the same border crossing, who is going to impersonate Bob to Alice. Mallory’s equipment includes an RFID reader and transmitter.

Assume that the card has to be activated in some way. Maybe the cover has to be opened, or the card taken out of a sleeve. Maybe the card has a button to push in order to activate it. Also assume the card has some challenge-response security protocol and an encrypted key exchange protocol of some sort.

  1. Alice’s reader sends a message to Mallory’s RFID chip.
  2. Mallory’s reader/transmitter receives the message, and rebroadcasts it to Bob’s chip.
  3. Bob’s chip responds normally to a valid message from Alice’s reader. He has no way of knowing that Mallory relayed the message.
  4. Mallory’s reader/transmitter receives Bob’s message and rebroadcasts it to Alice. Alice has no way of knowing that the message was relayed.
  5. Mallory continues to relay messages back and forth between Alice and Bob.

Defending against this attack is hard. (I talk more about the attack in Applied Cryptography, Second Edition, page 109.) Time stamps don’t help. Encryption doesn’t help. It works because Mallory is simply acting as an amplifier. Mallory might not be able to read the messages. He might not even know who Bob is. But he doesn’t care. All he knows is that Alice thinks he’s Bob.
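To see concretely why the cryptography doesn’t help, here is a toy message-level simulation in Python. The HMAC challenge-response protocol is an assumption for illustration, not the actual card protocol; the point is that Mallory never reads or forges anything, yet Alice accepts.

# Mallory as a pure relay between Alice's reader and Bob's card.
import hashlib
import hmac
import os

CARD_KEY = os.urandom(16)           # shared by Bob's card and Alice's reader

def reader_challenge():
    return os.urandom(16)

def card_respond(challenge):
    # Bob's card answers any well-formed challenge it hears.
    return hmac.new(CARD_KEY, challenge, hashlib.sha256).digest()

def reader_verify(challenge, response):
    expected = hmac.new(CARD_KEY, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

def mallory_relay(message):
    return message                  # she can't decrypt it, and doesn't need to

challenge = reader_challenge()                      # 1. Alice -> "Mallory's card"
relayed_challenge = mallory_relay(challenge)        # 2. Mallory -> Bob's card
response = card_respond(relayed_challenge)          # 3. Bob's card answers
relayed_response = mallory_relay(response)          # 4. Mallory -> Alice
print(reader_verify(challenge, relayed_response))   # True: Alice thinks Mallory is Bob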

Precise timing can catch this attack, because of the extra delay that Mallory’s relay introduces. But I don’t think this is part of the spec.

The attack can be easily countered if Alice looks at Mallory’s card and compares the information printed on it with what she’s receiving over the RFID link. But near as I can tell, the point of the 25-foot read distance is so cards can be authenticated in bulk, from a distance.

From the News.com article:

Homeland Security has said, in a government procurement notice posted in September, that “read ranges shall extend to a minimum of 25 feet” in RFID-equipped identification cards used for border crossings. For people crossing on a bus, the proposal says, “the solution must sense up to 55 tokens.”

If Mallory is on that bus, he can impersonate any nearby Bob who activates his RFID card early. And at a crowded border crossing, the odds of some Bob doing that are pretty good.

More detail here:

If that were done, the PASS system would automatically screen the cardbearers against criminal watch lists and put the information on the border guard’s screen by the time the vehicle got to the station, Williams said.

And would predispose the guard to think that everything’s okay, even if it isn’t.

I don’t think people are thinking this one through.

Posted on April 25, 2006 at 7:32 AM • 58 Comments

Microsoft Vista's Endless Security Warnings

Paul Thurrott has posted an excellent essay on the problems with Windows Vista. Most interesting to me is how they implement UAP (User Account Protection):

Modern operating systems like Linux and Mac OS X operate under a security model where even administrative users don’t get full access to certain features unless they provide an in-place logon before performing any task that might harm the system. This type of security model protects users from themselves, and it is something that Microsoft should have added to Windows years and years ago.

Here’s the good news. In Windows Vista, Microsoft is indeed moving to this kind of security model. The feature is called User Account Protection (UAP) and, as you might expect, it prevents even administrative users from performing potentially dangerous tasks without first providing security credentials, thus ensuring that the user understands what they’re doing before making a critical mistake. It sounds like a good system. But this is Microsoft we’re talking about here. They completely botched UAP.

The bad news, then, is that UAP is a sad, sad joke. It’s the most annoying feature that Microsoft has ever added to any software product, and yes, that includes that ridiculous Clippy character from older Office versions. The problem with UAP is that it throws up an unbelievable number of warning dialogs for even the simplest of tasks. That these dialogs pop up repeatedly for the same action would be comical if it weren’t so amazingly frustrating. It would be hilarious if it weren’t going to affect hundreds of millions of people in a few short months. It is, in fact, almost criminal in its insidiousness.

Let’s look at a typical example. One of the first things I do whenever I install a new Windows version is download and install Mozilla Firefox. If we forget, for a moment, the number of warning dialogs we get during the download and install process (including a brazen security warning from Windows Firewall for which Microsoft should be chastised), let’s just examine one crucial, often overlooked issue. Once Firefox is installed, there are two icons on my Desktop I’d like to remove: The Setup application itself and a shortcut to Firefox. So I select both icons and drag them to the Recycle Bin. Simple, right?

Wrong. Here’s what you have to go through to actually delete those files in Windows Vista. First, you get a File Access Denied dialog (Figure) explaining that you don’t, in fact, have permission to delete a … shortcut?? To an application you just installed??? Seriously?

OK, fine. You can click a Continue button to “complete this operation.” But that doesn’t complete anything. It just clears the desktop for the next dialog, which is a Windows Security window (Figure). Here, you need to give your permission to continue something opaquely called a “File Operation.” Click Allow, and you’re done. Hey, that’s not too bad, right? Just two dialogs to read, understand, and then respond correctly to. What’s the big deal?

What if you’re doing something a bit more complicated? Well, lucky you, the dialogs stack right up, one after the other, in a seemingly never-ending display of stupidity. Indeed, sometimes you’ll find yourself unable to do certain things for no good reason, and you click Allow buttons until you’re blue in the face. It will never stop bothering you, unless you agree to stop your silliness and leave that file on the desktop where it belongs. Mark my words, this will happen to you. And you will hate it.

The problem with lots of warning dialog boxes is that they don’t provide security. Users stop reading them. They think of them as annoyances, as an extra click required to get a feature to work. Clicking through gets embedded into muscle memory, and when it actually matters the user won’t even realize it.

Jeff Atwood says the same thing:

The problem with the Security Through Endless Warning Dialogs school of thought is that it doesn’t work. All those earnest warning dialogs eventually blend together into a giant “click here to get work done” button that nobody bothers to read any more. The operating system cries wolf so much that when a real wolf—in the form of a virus or malware—rolls around, you’ll mindlessly allow it access to whatever it wants, just out of habit.

So does Rick Strahl:

Then there are the security dialogs. Ah yes, now we’re making progress: Ask users on EVERY program you launch that isn’t signed whether they want to elevate permissions. Uh huh, this is going to work REAL WELL. We know how well that worked with unsigned ActiveX controls in Internet Explorer—so well that even Microsoft isn’t signing most of its own ActiveX controls. Give too many warnings that are not quite reasonable and people will never read the dialogs and just click them anyway… I know I started doing that in the short use I’ve had on Vista.

These dialog boxes are not security for the user, they’re CYA security from the user. When some piece of malware trashes your system, Microsoft can say: “You gave the program permission to do that; it’s not our fault.”

Warning dialog boxes are only effective if the user has the ability to make intelligent decisions about the warnings. If the user cannot do that, they’re just annoyances. And they’re annoyances that don’t improve security.

EDITED TO ADD (5/8): Commentary.

Posted on April 24, 2006 at 1:43 PM • 101 Comments

Mafia Boss Secures His Data with Caesar Cipher

Odd story:

At least one coded note, published in the Web site’s biography, has a strong resemblance to what’s known as a Caesar cipher, an encryption scheme used by Julius Caesar to protect important military messages.

The letter, written in January 2001 by Angelo Provenzano to his father, was found with other documents when one of Provenzano’s men, Nicola La Barbera, was arrested.

“…I met 512151522 191212154 and we agreed that we will see each other after the holidays…,” said the letter, which included several other cryptograms.

“The Binnu code is nothing new: each number corresponds to a letter of the alphabet. ‘A’ is 4, ‘B’ is 5, ‘C’ is 6 and so on until the letter Z, which corresponds to number 24,” wrote Palazzolo and Oliva.
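A quick sketch of that substitution, assuming the 21-letter Italian alphabet (which is why Z lands on 24) and a longest-match parse of the digit strings; the reading of the two number groups from the quoted letter is my own, not something the article spells out.

# The Provenzano substitution: A=4, B=5, ... Z=24 over the Italian alphabet.
ALPHABET = "ABCDEFGHILMNOPQRSTUVZ"                  # no J, K, W, X, Y
NUM_TO_LETTER = {i + 4: c for i, c in enumerate(ALPHABET)}

def decode(digits):
    out, i = [], 0
    while i < len(digits):
        # Prefer a two-digit number if it maps to a letter, otherwise one digit.
        if i + 1 < len(digits) and int(digits[i:i + 2]) in NUM_TO_LETTER:
            out.append(NUM_TO_LETTER[int(digits[i:i + 2])])
            i += 2
        else:
            out.append(NUM_TO_LETTER[int(digits[i])])
            i += 1
    return "".join(out)

print(decode("512151522"), decode("191212154"))     # my reading: BINNU RIINA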

I got a nice quote:

“Looks like kindergarten cryptography to me. It will keep your kid sister out, but it won’t keep the police out. But what do you expect from someone who is computer illiterate?” security guru Bruce Schneier, author of several books on cryptography, told Discovery News.

Posted on April 24, 2006 at 6:52 AM • 51 Comments

Movie Plot Threat Contest: Status Report

On the first of this month, I announced my (possibly First) Movie-Plot Threat Contest.

Entrants are invited to submit the most unlikely, yet still plausible, terrorist attack scenarios they can come up with.

Your goal: cause terror. Make the American people notice. Inflict lasting damage on the U.S. economy. Change the political landscape, or the culture. The more grandiose the goal, the better.

Assume an attacker profile on the order of 9/11: 20 to 30 unskilled people, and about $500,000 with which to buy skills, equipment, etc.

As of this morning, the blog post has 580 comments. I expected a lot of submissions, but the response has blown me away.

Looking over the different terrorist plots, they seem to fall into several broad categories. The first category consists of attacks against our infrastructure: the food supply, the water supply, the power infrastructure, the telephone system, etc. The idea is to cripple the country by targeting one of the basic systems that make it work.

The second category consists of big-ticket plots. Either they have very public targets—blowing up the Super Bowl, the Oscars, etc.—or they have high-tech components: nuclear waste, anthrax, chlorine gas, a full oil tanker, etc. And they are often complex and hard to pull off. This is the 9/11 idea: a single huge event that affects the entire nation.

The third category consists of low-tech attacks that go on and on. Several people imagined a version of the DC sniper scenario, but with multiple teams. The teams would slowly move around the country, perhaps each team starting up after the previous one was captured or killed. Other people suggested a variant of this with small bombs in random public locations around the country.

(There’s a fourth category: actual movie plots. Some entries are comical, unrealistic, have science fiction premises, etc. I’m not even considering those.)

The better ideas tap directly into public fears. In my book, Beyond Fear, I discuss five different tendencies people have to exaggerate risks: to believe that something is more risky than it actually is.

  1. People exaggerate spectacular but rare risks and downplay common risks.
  2. People have trouble estimating risks for anything not exactly like their normal situation.
  3. Personified risks are perceived to be greater than anonymous risks.
  4. People underestimate risks they willingly take and overestimate risks in situations they can’t control.
  5. People overestimate risks that are being talked about and remain an object of public scrutiny.

The best plot ideas leverage one or more of those tendencies. Big-ticket attacks leverage the first. Infrastructure and low-tech attacks leverage the fourth. And every attack tries to leverage the fifth, especially those attacks that go on and on. I’m willing to bet that when I find a winner, it will be the plot that leverages the greatest number of those tendencies to the best possible advantage.

I also got a bunch of e-mails from people with ideas they thought too terrifying to post publicly. Some of them wouldn’t even share those ideas with me. I also received e-mails from people accusing me of helping the terrorists by giving them ideas.

But if there’s one thing this contest demonstrates, it’s that good terrorist ideas are a dime a dozen. Anyone can figure out how to cause terror. The hard part is execution.

Some of the submitted plots require minimal skill and equipment. Twenty guys with cars and guns—that sort of thing. Reading through them, you have to wonder why there have been no terrorist attacks in the U.S. since 9/11. I don’t believe the “flypaper theory,” that the terrorists are all in Iraq instead of in the U.S. And despite all the ineffectual security we’ve put in place since 9/11, I’m sure we have had some successes in intelligence and investigation—and have made it harder for terrorists to operate both in the U.S. and abroad.

But mostly, I think terrorist attacks are much harder than most of us think. It’s harder to find willing recruits than we think. It’s harder to coordinate plans. It’s harder to execute those plans. Terrorism is rare, and for all we’ve heard about 9/11 changing the world, it’s still rare.

The submission deadline is the end of this month, so there’s still time to submit your entry. And please read through some of the others and comment on them; I’m curious as to what other people think are the most interesting, compelling, realistic, or effective scenarios.

EDITED TO ADD (4/23): The contest made The New York Times.

Posted on April 22, 2006 at 10:14 AM • 76 Comments

Software Failure Causes Airport Evacuation

Last month I wrote about airport passenger screening, and mentioned that the X-ray equipment inserts “test” bags into the stream in order to keep screeners more alert. That system failed pretty badly earlier this week at Atlanta’s Hartsfield-Jackson Airport, when a false alarm resulted in a two-hour evacuation of the entire airport.

The screening system injects test images onto the screen. Normally the software flashes the words “This is a test” on the screen after a brief delay, but this time the software failed to indicate that. The screener noticed the image (of a “suspicious device,” according to CNN) and, per procedure, screeners manually checked the bags on the conveyor belt for it. They couldn’t find it, of course, but they evacuated the airport and spent two hours vainly searching for it.

Hartsfield-Jackson is the country’s busiest passenger airport. It’s Delta’s hub. The delays were felt across the country for the rest of the day.

Okay, so what went wrong here? Clearly the software failed. Just as clearly the screener procedures didn’t fail—everyone did what they were supposed to do.

What is less obvious is that the system failed. It failed, because it was not designed to fail well. A small failure—in this case, a software glitch in a single X-ray machine—cascaded in such a way as to shut down the entire airport. This kind of failure magnification is common in poorly designed security systems. Better would be for there to be individual X-ray machines at the gates—I’ve seen this design at several European airports—so that when there’s a problem the effects are restricted to that gate.

Of course, this distributed security solution would be more expensive. But I’m willing to bet it would be cheaper overall, taking into account the cost of occasionally clearing out an airport.

Posted on April 21, 2006 at 12:49 PM • 30 Comments

The Kryptos Sculpture

The Kryptos Sculpture is located in the center of the CIA Headquarters in Langley, VA. It was designed in 1990, and contains a four-part encrypted puzzle. The first three parts have been solved, but now we’ve learned that the second-part solution was wrong and here’s the corrected solution.

The fourth part remains unsolved. Wired wrote:

Sanborn has said that clues to the last section, which has only 97 letters, are contained in previously deciphered parts. Therefore getting those first three sections correct has been crucial.

Posted on April 21, 2006 at 7:54 AM • 20 Comments

Terrorist Travel Advisory

From the Pittsburgh Post-Gazette:

My son and I woke up Sunday morning and drove a rented truck to New York City to move his worldly goods into an apartment there. As we made it to the Holland Tunnel, after traveling the Tony Soprano portion of the Jersey Turnpike with a blue moon in our eyes, the woman in the toll booth informed us that, since 9/11, trucks were not allowed in the tunnel; we’d have to use the Lincoln Tunnel, she said. So if you are a terrorist trying to get into New York from Jersey, be advised that you’re going to have to use the Lincoln Tunnel.

Posted on April 20, 2006 at 12:09 PM • 46 Comments

Identity-Theft Disclosure Laws

California was the first state to pass a law requiring companies that keep personal data to disclose when that data is lost or stolen. Since then, many states have followed suit. Now Congress is debating federal legislation that would do the same thing nationwide.

Except that it won’t do the same thing: The federal bill has become so watered down that it won’t be very effective. I would still be in favor of it—a poor federal law is better than none—if it didn’t also pre-empt more-effective state laws, which makes it a net loss.

Identity theft is the fastest-growing area of crime. It’s badly named—your identity is the one thing that cannot be stolen—and is better thought of as fraud by impersonation. A criminal collects enough personal information about you to be able to impersonate you to banks, credit card companies, brokerage houses, etc. Posing as you, he steals your money, or takes a destructive joyride on your good credit.

Many companies keep large databases of personal data that is useful to these fraudsters. But because the companies don’t shoulder the cost of the fraud, they’re not economically motivated to secure those databases very well. In fact, if your personal data is stolen from their databases, they would much rather not even tell you: Why deal with the bad publicity?

Disclosure laws force companies to make these security breaches public. This is a good idea for three reasons. One, it is good security practice to notify potential identity theft victims that their personal information has been lost or stolen. Two, statistics on actual data thefts are valuable for research purposes. And three, the potential cost of the notification and the associated bad publicity naturally leads companies to spend more money on protecting personal information—or to refrain from collecting it in the first place.

Think of it as public shaming. Companies will spend money to avoid the PR costs of this shaming, and security will improve. In economic terms, the law reduces the externalities and forces companies to deal with the true costs of these data breaches.

This public shaming needs the cooperation of the press and, unfortunately, there’s an attenuation effect going on. The first major breach after California passed its disclosure law—SB1386—was in February 2005, when ChoicePoint sold personal data on 145,000 people to criminals. The event was all over the news, and ChoicePoint was shamed into improving its security.

Then LexisNexis exposed personal data on 300,000 individuals. And Citigroup lost data on 3.9 million individuals. SB1386 worked; the only reason we knew about these security breaches was because of the law. But the breaches came in increasing numbers, and in larger quantities. After a while, it was no longer news. And when the press stopped reporting, the “cost” of these breaches to the companies declined.

Today, the only real cost that remains is the cost of notifying customers and issuing replacement cards. It costs banks about $10 to issue a new card, and that’s money they would much rather not have to spend. This is the agenda they brought to the federal bill, cleverly titled the Data Accountability and Trust Act, or DATA.

Lobbyists attacked the legislation in two ways. First, they went after the definition of personal information. Only the exposure of very specific information requires disclosure. For example, the theft of a database that contained people’s first initial, middle name, last name, Social Security number, bank account number, address, phone number, date of birth, mother’s maiden name and password would not have to be disclosed, because “personal information” is defined as “an individual’s first and last name in combination with …” certain other personal data.

Second, lobbyists went after the definition of “breach of security.” The latest version of the bill reads: “The term ‘breach of security’ means the unauthorized acquisition of data in electronic form containing personal information that establishes a reasonable basis to conclude that there is a significant risk of identity theft to the individuals to whom the personal information relates.”

Get that? If a company loses a backup tape containing millions of individuals’ personal information, it doesn’t have to disclose if it believes there is no “significant risk of identity theft.” If it leaves a database exposed, and has absolutely no audit logs of who accessed that database, it could claim it has no “reasonable basis” to conclude there is a significant risk. Actually, the company could point to a study that showed the probability of fraud to someone who has been the victim of this kind of data loss to be less than 1 in 1,000—which is not a “significant risk”—and then not disclose the data breach at all.

Even worse, this federal law pre-empts the 23 existing state laws—and others being considered—many of which contain stronger individual protections. So while DATA might look like a law protecting consumers nationwide, it is actually a law protecting companies with large databases from state laws protecting consumers.

So in its current form, this legislation would make things worse, not better.

Of course, things are in flux. They’re always in flux. The language of the bill has changed regularly over the past year, as various committees got their hands on it. There’s also another bill, HR3997, which is even worse. And even if something passes, it has to be reconciled with whatever the Senate passes, and then voted on again. So no one really knows what the final language will look like.

But the devil is in the details, and the only way to protect us from lobbyists tinkering with the details is to ensure that the federal bill does not pre-empt any state bills: that the federal law is a minimum, but that states can require more.

That said, disclosure is important, but it’s not going to solve identity theft. As I’ve written previously, the reason theft of personal information is so common is that the data is so valuable. The way to mitigate the risk of fraud due to impersonation is not to make personal information harder to steal, it’s to make it harder to use.

Disclosure laws only deal with the economic externality of data brokers protecting your personal information. What we really need are laws prohibiting credit card companies and other financial institutions from granting credit to someone using your name with only a minimum of authentication.

But until that happens, we can at least hope that Congress will refrain from passing bad bills that override good state laws—and helping criminals in the process.

This essay originally appeared on Wired.com.

EDITED TO ADD (4/20): Here’s a comparison of state disclosure laws.

Posted on April 20, 2006 at 8:11 AM • 34 Comments

DHS Releases RFP for Secure Border Initiative

The Department of Homeland Security has released a Request for Proposal—that’s the document asking industry if anyone can do what it wants—for the Secure Border Initiative. Washington Technology has the story:

The long-awaited request for proposals for Secure Border Initiative-Net was released today by the Homeland Security Department, which is calling the project the “most comprehensive effort in the nation’s history” to gain control of the borders.

The 144-page document outlines the purpose and scope of the border surveillance technology program, which supplements other efforts to control the border and enforce immigration laws.

Posted on April 19, 2006 at 7:12 AM • 54 Comments

Graffiti on Air Force One?

Here’s a video of a bunch of graffiti artists breaching security at Andrews Air Force Base, and tagging an Air Force One plane.

I know there are multiple planes—four, I think—and that they are in different states of active service at any one time. And, presumably, the different planes have different security levels depending on their status. Still, part of me thinks this is a hoax.

One, this is the sort of stunt that can get you shot at. And two, posting a video of this can get you arrested.

Anyone know anything about this?

EDITED TO ADD (4/21): It’s a hoax.

Posted on April 18, 2006 at 1:10 PM • 75 Comments

Deniable File System

Some years ago I did some design work on something I called a Deniable File System. The basic idea was that the existence of ciphertext can in itself be incriminating, regardless of whether or not anyone can decrypt it. I wanted to create a file system that was deniable: where encrypted files looked like random noise, and where it was impossible to prove either the existence or non-existence of encrypted files.

This turns out to be a very hard problem for a whole lot of reasons, and I never pursued the project. But I just discovered a file system that seems to meet all of my design criteria—Rubberhose:

Rubberhose transparently and deniably encrypts disk data, minimising the effectiveness of warrants, coercive interrogations and other compulsive mechanisms, such as U.K. RIP legislation. Rubberhose differs from conventional disk encryption systems in that it has an advanced modular architecture, self-test suite, is more secure, portable, utilises information hiding (steganography / deniable cryptography), works with any file system and has source freely available.

The devil really is in the details with something like this, and I would hesitate to use this in places where it really matters without some extensive review. But I’m pleased to see that someone is working on this problem.
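To make the idea concrete, here is a toy sketch. It is not Rubberhose and not a secure construction (the SHA-256 counter-mode keystream and the key-derived offset are purely illustrative); it only shows why a container that always looks like random noise gives nothing away about whether anything is stored in it.

# Toy "deniable" container: random bytes, with data XORed in at a secret offset.
import hashlib
import os

CONTAINER_SIZE = 1 << 20            # 1 MiB of random-looking bytes

def keystream(key, length):
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def offset_for(key, length):
    # Only the key holder can recompute where the data lives.
    return int.from_bytes(hashlib.sha256(key + b"offset").digest(), "big") % (
        CONTAINER_SIZE - length)

def new_container():
    return bytearray(os.urandom(CONTAINER_SIZE))    # indistinguishable from "empty"

def store(container, key, data):
    off = offset_for(key, len(data))
    ks = keystream(key, len(data))
    container[off:off + len(data)] = bytes(a ^ b for a, b in zip(data, ks))

def retrieve(container, key, length):
    off = offset_for(key, length)
    ks = keystream(key, length)
    return bytes(a ^ b for a, b in zip(container[off:off + length], ks))

c = new_container()
store(c, b"my secret key", b"the file that may or may not exist")
print(retrieve(c, b"my secret key", 34))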

Next request: A deniable file system that fits on a USB token, and leaves no trace on the machine it’s plugged into.

Posted on April 18, 2006 at 7:17 AM • 92 Comments

Man Diverts Mail to Himself

Someone filed change-of-address forms with the post office to divert other people’s mail to himself. 170 times.

Postal Service spokeswoman Patricia Licata said a credit card is required for security reasons. “We have systems in place to prevent this type of occurrence,” she said, but declined further comment on the specific case until officials have time to analyze what happened.

Sounds like those systems don’t work very well.

Posted on April 17, 2006 at 12:02 PM • 26 Comments

Triple-DES Upgrade Adding Insecurities?

It’s a provocative headline: “Triple DES Upgrades May Introduce New ATM Vulnerabilities.” Basically, at the same time ATM operators are upgrading their encryption to triple-DES, they’re also moving the communications links from dedicated lines to the Internet. And while the protocol encrypts PINs, it doesn’t encrypt any of the other information, such as card numbers and expiration dates.

So it’s the move from dedicated lines to the Internet that’s adding the insecurities.

Posted on April 17, 2006 at 6:48 AM • 29 Comments

AT&T Assisting NSA Surveillance

Interesting details emerging from EFF’s lawsuit:

According to a statement released by Klein’s attorney, an NSA agent showed up at the San Francisco switching center in 2002 to interview a management-level technician for a special job. In January 2003, Klein observed a new room being built adjacent to the room housing AT&T’s #4ESS switching equipment, which is responsible for routing long distance and international calls.

“I learned that the person whom the NSA interviewed for the secret job was the person working to install equipment in this room,” Klein wrote. “The regular technician work force was not allowed in the room.”

Klein’s job eventually included connecting internet circuits to a splitting cabinet that led to the secret room. During the course of that work, he learned from a co-worker that similar cabinets were being installed in other cities, including Seattle, San Jose, Los Angeles and San Diego.

“While doing my job, I learned that fiber optic cables from the secret room were tapping into the Worldnet (AT&T’s internet service) circuits by splitting off a portion of the light signal,” Klein wrote.

The split circuits included traffic from peering links connecting to other internet backbone providers, meaning that AT&T was also diverting traffic routed from its network to or from other domestic and international providers, according to Klein’s statement.

The secret room also included data-mining equipment called a Narus STA 6400, “known to be used particularly by government intelligence agencies because of its ability to sift through large amounts of data looking for preprogrammed targets,” according to Klein’s statement.

Narus, whose website touts AT&T as a client, sells software to help internet service providers and telecoms monitor and manage their networks, look for intrusions, and wiretap phone calls as mandated by federal law.

More about what the Narus box can do.

EDITED TO ADD (4/14): More about Narus.

Posted on April 14, 2006 at 7:58 AM • 50 Comments

Social Engineering a Police Officer

Really nice social engineering example. Note his repeated efforts to ensure that if he’s stopped again, he can rely on the cop to vouch for him.

Smooth-talking escapee evades police

Woe is Carl Bordelon, a police officer for the town of Ball, La. His dashboard camera captured (below) his questioning of Richard Lee McNair, 47, on Wednesday. Earlier that same day, McNair had escaped from a federal penitentiary at nearby Pollock, La., reportedly hiding in a prison warehouse and sneaking out in a mail van. Bordelon, on the lookout, stopped McNair when he saw him running along some railroad tracks. What follows is a chillingly fascinating performance from McNair, who manages to remain fairly smooth and matter-of-fact while tripping up Bordelon. The officer notices that the guy matches the description of McNair—who was serving a life sentence for killing a trucker at a grain elevator in Minot, N.D., in 1987—observes that he looked like he’d “been through a briar patch” and had to wonder why he would choose appalling heat (at least according to that temperature gauge in the police car) to go running, without any identification, on a dubious 12-mile run. But he doesn’t notice when McNair changes his story—he gives two different names (listen for it)—and eventually, Bordelon bids him farewell, saying: “Be careful, buddy.” McNair remains on the loose. (Note: Video is more than eight minutes long but worth it.)

Posted on April 13, 2006 at 7:03 AM • 49 Comments

What if Your Vendor Won't Sell You a Security Upgrade?

Good question:

More frightening than my experience is the possibility that the company might do this to an existing customer. What good is a security product if the vendor refuses to sell you service on it? Without updates, most of these products are barely useful as doorstops.

The article demonstrates that a vendor might refuse to sell you a product, for reasons you can’t understand. And that you might not get any warning of that fact. The moral is that you’re not only buying a security product, you’re buying a security company.

In our tests, we look at products, not companies. Things such as training, finances and corporate style don’t come into it. But when it comes to buying products, our tests aren’t enough. It’s important to investigate all those peripheral aspects of the vendor before you sign a purchase order. I was reminded of that the hard way.

Posted on April 12, 2006 at 12:40 PM • 30 Comments

Military Secrets for Sale in Afghanistan

Stolen goods are being sold in the markets, including hard drives filled with classified data.

A reporter recently obtained several drives at the bazaar that contained documents marked “Secret.” The contents included documents that were potentially embarrassing to Pakistan, a U.S. ally, presentations that named suspected militants targeted for “kill or capture” and discussions of U.S. efforts to “remove” or “marginalize” Afghan government officials whom the military considered “problem makers.”

The drives also included deployment rosters and other documents that identified nearly 700 U.S. service members and their Social Security numbers, information that identity thieves could use to open credit card accounts in soldiers’ names.

EDITED TO ADD (4/12): NPR story.

Posted on April 12, 2006 at 6:25 AM • 34 Comments

Air Force One Security Leak

Last week the San Francisco Chronicle broke the story that Air Force One’s defenses were exposed on a public Internet site:

Thus, the Air Force reacted with alarm last week after The Chronicle told the Secret Service that a government document containing specific information about the anti-missile defenses on Air Force One and detailed interior maps of the two planes—including the location of Secret Service agents within the planes—was posted on the Web site of an Air Force base.

The document also shows the location where a terrorist armed with a high-caliber sniper rifle could detonate the tanks that supply oxygen to Air Force One’s medical facility.

And a few days later:

Air Force and Pentagon officials scrambled Monday to remove highly sensitive security details about the two Air Force One jetliners after The Chronicle reported that the information had been posted on a public Web site.

The security information—contained in a “technical order”—is used by rescue crews in the event of an emergency aboard various Air Force planes. But this order included details about Air Force One’s anti-missile systems, the location of Secret Service personnel within the aircraft and information on other vulnerabilities that terrorists or a hostile military force could exploit to try to damage or destroy Air Force One, the president’s air carrier.

“We are dealing with literally hundreds of thousands of Web pages, and Web pages are reviewed on a regular basis, but every once in a while something falls through the cracks,” Air Force spokeswoman Lt. Col. Catherine Reardon told The Chronicle.

“We can’t even justify how (the technical order) got out there. It should have been password-protected. We regret it happened. We removed it, and we will look more closely in the future.”

Turns out that this story involves a whole lot more hype than actual security.

The document Caffera found is part of the Air Force’s Technical Order 00-105E-9 – Aerospace Emergency Rescue and Mishap Response Information (Emergency Services) Revision 11. It resided, until recently, on the web site of the Air Logistics Center at Warner Robins Air Force Base. The purpose is pretty straight-ahead: “Recent technological advances in aviation have caused concern for the modern firefighter.” So the document gives “aircraft hazards, cabin configurations, airframe materials, and any other information that would be helpful in fighting fires.”

A February 2006 briefing from the Air Force Civil Engineer Support Agency explains that the document is “used by foreign governments or international organizations and is cleared to share this information with the general global public…distribution is unlimited.” The Technical Order existed solely on paper from 1970 to mid-1996, when the Secretary of the Air Force directed that henceforth all technical orders be distributed electronically (for a savings of $270,000 a year). The first CD-ROMs were distributed in January 1999 and the web site at Warner Robins was set up 10 months later. A month after that, the web site became the only place to access the documents, which are routinely updated to reflect changes in aircraft or new regulations.

But back to the document Caffera found. It’s hardly a secret that Air Force One has defenses against surface-to-air missiles. The page that so troubled Caffera indicates that the plane employs infrared countermeasures, with radiating units positioned on the tail and next to or on all four engine pylons. Why does the document provide that level of detail? Because emergency responders could be injured if they walk within a certain radius of one of the IR units while it is operating.

Nor is it remarkable that Secret Service agents would sit in areas on the plane that are close to the President’s suite, as well as between reporters, who are known to sit in the back of the plane, and everyone else. Exactly how this information endangers anyone is unclear. But it would help emergency responders in figuring out where to look for people in the event of an accident. (Interestingly, conjectural drawings of the layout of Air Force One like this one are pretty close to the real deal.)

As for hitting the medical oxygen tanks to destroy the plane, you’d have to be really, really lucky to do that while the plane is moving at any significant speed. And if it’s standing still and you are after the President and armed with a high-caliber sniper rifle, why wouldn’t you target him directly? Besides, if you wanted to make the plane explode, it would be much easier to aim for the fuel tanks in the wings (which when fully-loaded hold 53,611 gallons). Terrorists don’t need a diagram to figure that out. But a rescuer would want this information so that the oxygen valves could be turned off to mitigate the risk of a fire or explosion.

[…]

An Air Force source familiar with the history and purpose of the documents who asked not to be identified laughed when told of the above quote, reiterated that the Technical Order is and always has been unclassified, and said it is unclear how the document can be distributed now, adding that firefighters in particular won’t like any changes that make their jobs more difficult or dangerous.

“The order came down this afternoon [Monday] to remove this particular technical order from the public Web site,” said John Birdsong, chief of media relations at Warner Robins Air Logistics Center, the air base in Georgia that had originally posted the order on its publicly accessible Web site.

According to Birdsong, the directive to remove the document came from a number of officials, including Dan McGarvey, the chief of information security for the Air Force at the Pentagon.

Muddying things still further are comments from Jean Schaefer, deputy chief of public affairs for the Secretary of the Air Force. “We have very clear policies of what should be on the Web,” she said. “We need to emphasize the policy to the field. It appears that this document shouldn’t have been on the Web, and we have pulled the document in question. Our policy is clear in that documents that could make our operations vulnerable or threaten the safety of our people should not be available on the Web.”

And now, apparently, neither should documents that help ensure the safety of our pilots, aircrews, firefighters and emergency responders.

Another news report.

Some blogs criticized the San Francisco Chronicle for publishing this, because it gives the terrorists more information. I think they should be criticized for publishing this, because there’s no story here.

EDITED TO ADD (4/11): Much of the document is here.

Posted on April 11, 2006 at 2:40 PM • 28 Comments

Terrorism Risks of Google Earth

Sometimes I wonder about “security experts.” Here’s one who thinks Google Earth is a terrorism risk because it allows people to learn the GPS coordinates of soccer stadiums. (English blog entry on the topic here.)

Basically, Klaus Dieter Matschke is worried because Google Earth provides the location of buildings within 20 meters, whereas before coordinates had an error range of one kilometer. He’s worried that this information will provide terrorists with the exact target coordinates for missile attacks.

I have no idea how anyone could print this drivel. Anyone can attend a football game with a GPS receiver in his pocket and get the coordinates down to one meter. Or buy a map.

Google Earth is not the problem; the problem is the availability of short-range missiles on the black market.

Posted on April 11, 2006 at 6:52 AM • 79 Comments

KittenAuth

You’ve all seen CAPTCHAs: those distorted pictures of letters and numbers you sometimes see on web forms. The idea is that it’s hard for computers to identify the characters, but easy for people. The goal of CAPTCHAs is to authenticate that there’s a person sitting in front of the computer.

KittenAuth works with images. The system shows you nine pictures of cute little animals, and the person authenticates himself by clicking on the three kittens. A computer clicking at random has only a 1 in 84 chance of guessing correctly.

Of course you could increase the security by adding more images or requiring the person to choose more images. Another worry—which I didn’t see mentioned—is that the computer could brute-force a static database. If there are only a small fixed number of actual kittens, the computer could be told—by a person—that they’re kittens. Then, the computer would know that whenever it sees that image it’s a kitten.
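The arithmetic behind that, and how the guessing odds scale if you do add images (the larger grid sizes here are just for illustration):

# Chance that a bot clicking at random picks exactly the right set of animals.
from math import comb

def guess_probability(total_images, must_pick):
    return 1 / comb(total_images, must_pick)

print(guess_probability(9, 3))      # 1/84, as quoted above
print(guess_probability(16, 4))     # 1/1820 with a 4x4 grid and four kittens
print(guess_probability(25, 5))     # 1/53130 with a 5x5 grid and five kittens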

Still, it’s an interesting idea that warrants more research.

Posted on April 10, 2006 at 1:19 PM • 82 Comments

No-Buy List

You’ve all heard of the “No Fly List.” Did you know that there’s a “No-Buy List” as well?

The so-called “Bad Guy List” is hardly a secret. The U.S. Treasury’s Office of Foreign Assets Control maintains its “Specially Designated Nationals and Blocked Persons List” to be easily accessible on its public Web site.

Wanna see it? Sure you do. Just key OFAC into your Web browser, and you’ll find the 224-page document of the names of individuals, organizations, corporations and Web sites the feds suspect of terrorist or criminal activities and associations.

You might think Osama bin Laden should be at the top of The List, but it’s alphabetized, so Public Enemy No. 1 is on Page 59 with a string of akas and spelling derivations filling most of the first column. If you’re the brother, daughter, son or sister-in-law of Yugoslavian ex-president Slobodan Milosevic (who died in custody recently), you’re named, too, so probably forget about picking up that lovely new Humvee on this side of the Atlantic. Same for Charles “Chuckie” Taylor, son of the recently arrested former president of Liberia (along with the deposed prez’s wife and ex-wife).

The Bad Guy List’s relevance to the average American consumer? What’s not widely known about it is that by federal law, sellers are supposed to check it even in the most common and mundane marketplace transactions.

“The OFAC requirements apply to all U.S. citizens. The law prohibits anyone, not just car dealers, from doing business with anyone whose name appears on the Office of Foreign Assets Control’s Specially Designated Nationals list,” says Thomas B. Hudson, senior partner at Hudson Cook LLP, a law firm in Hanover, Md., and publisher of Carlaw and Spot Delivery, legal-compliance newsletters and services for car dealers and finance companies.

Hudson says that, according to the law, supermarkets, restaurants, pawnbrokers, real estate agents, everyone, even The Washington Post, is prohibited from doing business with anyone named on the list. “There is no minimum amount for the transactions covered by the OFAC requirement, so everyone The Post sells a paper to or a want ad to whose name appears on the SDN list is a violation,” says Hudson, whose new book, “Carlaw—A Southern Attorney Delivers Humorous Practical Legal Advice on Car Sales and Financing,” comes out this month. “The law applies to you personally, as well.”

But The Bad Guy List law (which predates the controversial Patriot Act) not only is “perfectly ridiculous,” it’s impractical, says Hudson. “I understand that 95 percent of the people whose names are on the list are not even in the United States. And if you were a bad guy planning bad acts, and you knew that your name was on a publicly available list that people were required to check in order to avoid violating the law, how dumb would you have to be to use your own name?”

Compliance is also a big problem. Think eBay sellers are checking the list for auction winners? Or that the supermarket checkout person is thanking you by name while scanning a copy of The List under the counter? Not likely.

Posted on April 10, 2006 at 6:23 AM35 Comments

Man Detained for Singing a Clash Song

I was going to ignore this one, but too many people sent it to me.

I was in New York yesterday, and I saw a sign at the entrance to the Midtown Tunnel that said: “See something? Say something.” The problem with a nation of amateur spies is that it produces exactly these sorts of incidents. “I know he’s a terrorist because he’s dressing funny and he always has white wires hanging out of his pocket.” “They all talk in a funny language, and their cooking smells bad.”

Amateur spies perform amateur spying. If everybody does it, the false alarms will overwhelm the police.
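
A toy calculation shows why. Assume, with entirely made-up numbers, that one person in a million is an actual terrorist, and that an amateur tipster flags a real terrorist 99 percent of the time but also flags an innocent person 1 percent of the time:

```python
p_terrorist = 1 / 1_000_000      # made-up prevalence
p_flag_given_terrorist = 0.99    # made-up tipster accuracy
p_flag_given_innocent = 0.01     # made-up false-positive rate

p_flag = (p_flag_given_terrorist * p_terrorist
          + p_flag_given_innocent * (1 - p_terrorist))
p_terrorist_given_flag = p_flag_given_terrorist * p_terrorist / p_flag
print(f"{p_terrorist_given_flag:.6f}")   # about 0.0001
```

Even with those generous assumptions, roughly one report in ten thousand points at a real terrorist; the rest are false alarms the police have to chase down.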

Posted on April 7, 2006 at 11:31 AM45 Comments

The "I'm Not the Criminal You're Looking For" Card

This is a great idea:

Lawmakers in Iowa are proposing a special “passport” meant to protect victims of identity theft against false criminal action and credit charges.

The “Identity Theft Passport” will be a card or certificate that victims of identity fraud can show to police or creditors to help demonstrate their innocence, Tom Sands, a state representative of the Iowa House and supporter of the proposal, said in an e-mail interview Tuesday.

I wrote about something similar in Beyond Fear:

In Singapore, some names are so common that the police issue He’s-not-the-guy-we’re-looking-for documents exonerating innocent people with the same names as wanted criminals.

EDITED TO ADD (4/7): Of course it will be forged; all documents are forged. And yes, I’ve recently written that documents are hard to verify. This is still a good idea, even though it’s not perfect.

Posted on April 6, 2006 at 1:13 PM50 Comments

VOIP Encryption

There are basically four ways to eavesdrop on a telephone call.

One, you can listen in on another phone extension. This is the method preferred by siblings everywhere. If you have the right access, it’s the easiest. While it doesn’t work for cell phones, cordless phones are vulnerable to a variant of this attack: A radio receiver set to the right frequency can act as another extension.

Two, you can attach some eavesdropping equipment to the wire with a pair of alligator clips. It takes some expertise, but you can do it anywhere along the phone line’s path—even outside the home. This used to be the way the police eavesdropped on your phone line. These days it’s probably most often used by criminals. This method doesn’t work for cell phones, either.

Three, you can eavesdrop at the telephone switch. Modern phone equipment includes the ability for someone to listen in this way. Currently, this is the preferred police method. It works for both land lines and cell phones. You need the right access, but if you can get it, this is probably the most comfortable way to eavesdrop on a particular person.

Four, you can tap the main trunk lines, eavesdrop on the microwave or satellite phone links, etc. It’s hard to eavesdrop on one particular person this way, but it’s easy to listen in on a large chunk of telephone calls. This is the sort of big-budget surveillance that organizations like the National Security Agency do best. They’ve even been known to use submarines to tap undersea phone cables.

That’s basically the entire threat model for traditional phone calls. And when most people think about IP telephony—voice over internet protocol, or VOIP—that’s the threat model they probably have in their heads.

Unfortunately, phone calls from your computer are fundamentally different from phone calls from your telephone. Internet telephony’s threat model is much closer to the threat model for IP-networked computers than the threat model for telephony.

And we already know the threat model for IP. Data packets can be eavesdropped on anywhere along the transmission path. Data packets can be intercepted in the corporate network, by the internet service provider and along the backbone. They can be eavesdropped on by the people or organizations that own those computers, and they can be eavesdropped on by anyone who has successfully hacked into those computers. They can be vacuumed up by nosy hackers, criminals, competitors and governments.
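
To make that concrete, here is a minimal sketch of passive interception in Python, assuming an unencrypted call carried as RTP over UDP port 5004 (a common default; the real port is negotiated per call) and the third-party scapy library. Anyone positioned on the path, or on a machine they’ve hacked along it, can run something like this:

```python
# Requires the third-party 'scapy' package and packet-capture privileges.
from scapy.all import sniff, UDP, Raw

RTP_PORT = 5004  # common default; the actual port is negotiated per call

def grab_audio(pkt):
    if pkt.haslayer(UDP) and pkt.haslayer(Raw):
        payload = bytes(pkt[Raw].load)
        # Skip the 12-byte fixed RTP header; the rest is the voice codec frame.
        with open("captured_audio.raw", "ab") as f:
            f.write(payload[12:])

# Anyone on the path (LAN, ISP, backbone, a compromised router) can do this.
sniff(filter=f"udp port {RTP_PORT}", prn=grab_audio, store=False)
```

The captured payloads are just codec frames; turning them back into audio is a post-processing step, not a barrier.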

It’s comparable to threat No. 3 above, but with the scope vastly expanded.

My greatest worry is the criminal attacks. We already have seen how clever criminals have become over the past several years at stealing account information and personal data. I can imagine them eavesdropping on attorneys, looking for information with which to blackmail people. I can imagine them eavesdropping on bankers, looking for inside information with which to make stock purchases. I can imagine them stealing account information, hijacking telephone calls, committing identity theft. On the business side, I can see them engaging in industrial espionage and stealing trade secrets. In short, I can imagine them doing all the things they could never have done with the traditional telephone network.

This is why encryption for VOIP is so important. VOIP calls are vulnerable to a variety of threats that traditional telephone calls are not. Encryption is one of the essential security technologies for computer data, and it will go a long way toward securing VOIP.
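
To be clear, this is not how Skype or Zfone actually do it; each has its own protocol. But the basic idea is simple: authenticate and encrypt every voice frame before it touches the network. Here is a minimal Python sketch, assuming the two endpoints have already agreed on a key, which is the hard part and the part protocols like ZRTP solve with a Diffie-Hellman exchange:

```python
import socket
import struct
from cryptography.hazmat.primitives.ciphers.aead import AESGCM  # pip install cryptography

# Both endpoints must hold the same key; real protocols negotiate it per call.
key = AESGCM.generate_key(bit_length=128)
aead = AESGCM(key)

def send_frame(sock: socket.socket, addr, seq: int, voice_frame: bytes) -> None:
    """Encrypt and authenticate one voice frame, then send it over UDP."""
    nonce = struct.pack(">IQ", 0, seq)          # 96-bit nonce; must never repeat under one key
    ciphertext = aead.encrypt(nonce, voice_frame, None)
    sock.sendto(struct.pack(">Q", seq) + ciphertext, addr)

def recv_frame(sock: socket.socket) -> bytes:
    data, _ = sock.recvfrom(65535)
    seq = struct.unpack(">Q", data[:8])[0]
    nonce = struct.pack(">IQ", 0, seq)
    return aead.decrypt(nonce, data[8:], None)  # raises InvalidTag if tampered with
```

The sequence number doubles as the nonce, so a modified or forged packet fails authentication outright instead of playing back as garbled audio.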

The last time this sort of thing came up, the U.S. government tried to sell us something called “key escrow.” Basically, the government likes the idea of everyone using encryption, as long as it has a copy of the key. This is an amazingly insecure idea for a number of reasons, mostly boiling down to the fact that when you provide a means of access into a security system, you greatly weaken its security.

A recent case in Greece demonstrated that perfectly: Criminals used a cell-phone eavesdropping mechanism already in place, designed for the police to listen in on phone calls. Had the call system been designed to be secure in the first place, there never would have been a backdoor for the criminals to exploit.

Fortunately, there are many VOIP-encryption products available. Skype has built-in encryption. Phil Zimmermann is releasing Zfone, an easy-to-use open-source product. There’s even a VOIP Security Alliance.

Encryption for IP telephony is important, but it’s not a panacea. Basically, it takes care of threats No. 2 through No. 4, but not threat No. 1. Unfortunately, that’s the biggest threat: eavesdropping at the end points. No amount of IP telephony encryption can prevent a Trojan or worm on your computer—or just a hacker who managed to get access to your machine—from eavesdropping on your phone calls, just as no amount of SSL or e-mail encryption can prevent a Trojan on your computer from eavesdropping—or even modifying—your data.

So, as always, it boils down to this: We need secure computers and secure operating systems even more than we need secure transmission.

This essay originally appeared on Wired.com.

Posted on April 6, 2006 at 5:09 AM63 Comments

Security Applications of Time-Reversed Acoustics

I simply don’t have the science to evaluate this claim:

Since conventional sound waves disperse when traveling through a medium, the possibility of focusing sound waves could have applications in several areas. In cryptography, for example, when sending a secret message, the sender could ensure that only one location would receive the message. Interceptors at other locations would only pick up noise due to unfocused waves. Other potential uses include antisubmarine warfare and underwater communications that benefit from targeted signaling.

Posted on April 5, 2006 at 1:06 PM36 Comments

Document Verification

According to The New York Times:

Undercover Congressional investigators successfully smuggled into the United States enough radioactive material to make two dirty bombs, even after it set off alarms on radiation detectors installed at border checkpoints, a new report says.

The reason is interesting:

The alarms went off in both locations, and the investigators were pulled aside for questioning. In both cases, they showed the agents from the Customs and Border Protection agency forged import licenses from the Nuclear Regulatory Commission, based on an image of the real document they found on the Internet.

The problem, the report says, is that the border agents have no routine way to confirm the validity of import licenses.

I’ve written about this problem before, and it’s one I think will get worse in the future. Verification systems are often the weakest link of authentication. Improving authentication tokens won’t improve security unless the verification systems improve as well.
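
I don’t know what NRC import licenses actually look like, but the fix is conceptually simple: the issuer signs the license data, and the checkpoint verifies that signature against the issuer’s published public key instead of eyeballing the paper. A minimal Python sketch, with invented license fields, using the cryptography library:

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Issuer side (done once, at issuance); the private key never leaves the agency.
issuer_key = Ed25519PrivateKey.generate()
issuer_public = issuer_key.public_key()
license_data = b"licensee=Acme Imports;material=Cs-137;expires=2006-12-31"
signature = issuer_key.sign(license_data)

# Checkpoint side: what the border agent's terminal would do.
def verify_license(public_key, data: bytes, sig: bytes) -> bool:
    try:
        public_key.verify(sig, data)
        return True
    except InvalidSignature:
        return False

# A license forged from an image found on the Internet fails this check,
# because the forger cannot produce a valid signature over the altered data.
```

The point isn’t the particular algorithm; it’s that the checkpoint finally has something it can actually check, which is exactly what’s missing today.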

Posted on April 5, 2006 at 8:43 AM35 Comments

Why Phishing Works

Interesting paper.

Abstract:

To build systems shielding users from fraudulent (or phishing) websites, designers need to know which attack strategies work and why. This paper provides the first empirical evidence about which malicious strategies are successful at deceiving general users. We first analyzed a large set of captured phishing attacks and developed a set of hypotheses about why these strategies might work. We then assessed these hypotheses with a usability study in which 22 participants were shown 20 web sites and asked to determine which ones were fraudulent. We found that 23% of the participants did not look at browser-based cues such as the address bar, status bar and the security indicators, leading to incorrect choices 40% of the time. We also found that some visual deception attacks can fool even the most sophisticated users. These results illustrate that standard security indicators are not effective for a substantial fraction of users, and suggest that alternative approaches are needed.

Here’s an article on the paper.

Posted on April 4, 2006 at 2:18 PM28 Comments

Security Screening for New York Helicopters

There’s a helicopter shuttle that runs from Lower Manhattan to Kennedy Airport. It’s basically a luxury item: for $139 you can avoid the drive to the airport. But, of course, security screeners are required for passengers, and that’s causing some concern:

At the request of U.S. Helicopter’s executives, the federal Transportation Security Administration set up a checkpoint, with X-ray and bomb-detection machines, to screen passengers and their luggage at the heliport.

The security agency is spending $560,000 this year to operate the checkpoint with a staff of eight screeners and is considering adding a checkpoint at the heliport at the east end of 34th Street. The agency’s involvement has drawn criticism from some elected officials.

“The bottom line here is that there are not enough screeners to go around,” said Senator Charles E. Schumer, Democrat of New York. “The fact that we are taking screeners that are needed at airports to satisfy a luxury market on the government’s dime is a problem.”

This is not a security problem; it’s an economics problem. And it’s a good illustration of the concept of “externalities.” An externality is an effect of a decision not borne by the decision-maker. In this example, U.S. Helicopter made a business decision to offer this service at a certain price. And customers will make a decision about whether or not the service is worth the money. But there is more to the cost than the $139. The cost of that checkpoint is an externality to both U.S. Helicopter and its customers, because the $560,000 spent on the security checkpoint is paid for by taxpayers. Taxpayers are effectively subsidizing the true cost of the helicopter trip.

The only way to solve this is for the government to bill passengers for the cost of their security screening. It wouldn’t be much per helicopter ticket, maybe $15. And it would be much less at major airports, because the economies of scale are so much greater.
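
The arithmetic behind that $15, with a passenger volume I am guessing at because the article doesn’t give one:

```python
annual_checkpoint_cost = 560_000   # TSA's figure from the article
passengers_per_day = 100           # my guess; the article doesn't say
per_ticket = annual_checkpoint_cost / (passengers_per_day * 365)
print(f"${per_ticket:.2f} per ticket")   # about $15
```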

The article even points out that customers would gladly pay the extra $15 because of another externality: the people who decide whether or not to take the helicopter trip are not the people actually paying for it.

Bobby Weiss, a self-employed stock trader and real estate broker who was U.S. Helicopter’s first paying customer yesterday, said he would pay $300 for a round trip to Kennedy, and he expected most corporate executives would, too.

“It’s $300, but so what? It goes on the expense account,” said Mr. Weiss, adding that he had no qualms about the diversion of federal resources to smooth the path of highfliers. “Maybe a richer guy may save a little time at the expense of a poorer guy who spends a little more time in line.”

What Mr. Weiss is saying is that the costs—both the direct cost and the cost of the security checkpoint—are externalities to him, so he really doesn’t care. Exactly.

Posted on April 4, 2006 at 7:51 AM50 Comments

Computer-Controlled Fasteners

It’s a really clever idea: bolts and latches that fasten and unfasten in response to remote computer commands.

What Rudduck developed are fasteners analogous to locks in doors, only in this case messages are sent electronically to engage the parts to lock or unlock. A quick electrical charge triggered remotely by a device or computer may move the part to lock, while another jolt disengages the unit.

Instead of nuts and bolts to hold two things together, these fasteners use hooks, latches and so-called smart materials that can change shape on command. The first commercial applications are intended for aircraft, allowing crews to quickly reshape interiors to maximize payload space. For long flights, the plane may need more high-cost business-class seats, while shorter hauls prefer a more abundant supply of coach seats.

Pretty clever, actually. The whole article is interesting.

But this part scares me:

A potential security breach threat apparently doesn’t exist.

“I wondered what’s to prevent some nut using a garage door opener from pushing the right buttons to make your airplane fall apart,” said Harrison. “But everything is locked down with codes, and the radio signals are scrambled, so this is fully secured against hackers.”

Clearly this Harrison guy knows nothing about computer security.
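
“Locked down with codes” and “scrambled” radio signals is marketing, not a security analysis. At a minimum you would want every command authenticated and protected against replay. Here is a minimal Python sketch of the idea; the key provisioning and packet format are invented, and a real design would also need key management, revocation, and a plan for a compromised controller:

```python
import hmac
import hashlib
import struct

SHARED_KEY = b"per-fastener secret provisioned at install time"  # hypothetical

def make_command(action: bytes, counter: int) -> bytes:
    """Controller side: bind the action to a monotonically increasing counter."""
    msg = struct.pack(">Q", counter) + action
    tag = hmac.new(SHARED_KEY, msg, hashlib.sha256).digest()
    return msg + tag

class Fastener:
    def __init__(self):
        self.last_counter = 0

    def accept(self, packet: bytes) -> bool:
        msg, tag = packet[:-32], packet[-32:]
        counter = struct.unpack(">Q", msg[:8])[0]
        if counter <= self.last_counter:          # replayed or stale command
            return False
        expected = hmac.new(SHARED_KEY, msg, hashlib.sha256).digest()
        if not hmac.compare_digest(tag, expected):  # forged command
            return False
        self.last_counter = counter
        return True  # only now actuate the latch
```

Even that only addresses the radio link; it does nothing about an attacker who compromises the computer that sends the commands in the first place.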

EDITED TO ADD: Slashdot has a thread on the topic.

Posted on April 3, 2006 at 12:57 PM43 Comments

GAO Homeland Security Reports

Last week the Government Accountability Office released three new reports on homeland security.

Posted on April 3, 2006 at 7:55 AM6 Comments

Announcing: Movie-Plot Threat Contest

NOTE: If you have a blog, please spread the word.

For a while now, I have been writing about our penchant for “movie-plot threats”: terrorist fears based on very specific attack scenarios. Terrorists with crop dusters, terrorists exploding baby carriages in subways, terrorists filling school buses with explosives—these are all movie-plot threats. They’re good for scaring people, but it’s just silly to build national security policy around them.

But if we’re going to worry about unlikely attacks, why can’t they be exciting and innovative ones? If Americans are going to be scared, shouldn’t they be scared of things that are really scary? “Blowing up the Super Bowl” is a movie plot to be sure, but it’s not a very good movie. Let’s kick this up a notch.

It is in this spirit I announce the (possibly First) Movie-Plot Threat Contest. Entrants are invited to submit the most unlikely, yet still plausible, terrorist attack scenarios they can come up with.

Your goal: cause terror. Make the American people notice. Inflict lasting damage on the U.S. economy. Change the political landscape, or the culture. The more grandiose the goal, the better.

Assume an attacker profile on the order of 9/11: 20 to 30 unskilled people, and about $500,000 with which to buy skills, equipment, etc.

Post your movie plots here on this blog.

Judging will be by me, swayed by popular acclaim in the blog comments section. The prize will be an autographed copy of Beyond Fear. And if I can swing it, a phone call with a real live movie producer.

Entries close at the end of the month—April 30—so Crypto-Gram readers can also play.

This is not an April Fool’s joke, although it’s in the spirit of the season. The purpose of this contest is absurd humor, but I hope it also makes a point. Terrorism is a real threat, but we’re not any safer through security measures that require us to correctly guess what the terrorists are going to do next.

Good luck.

EDITED TO ADD (4/4): There are hundreds of ideas here.

EDITED TO ADD (4/22): Update here.

Posted on April 1, 2006 at 9:35 AM972 Comments
