Blog: July 2008 Archives

Why You Should Never Talk to the Police

This is an engaging and fascinating video presentation by Professor James Duane of the Regent University School of Law, explaining why—in a criminal matter—you should never, ever, ever talk to the police or any other government agent. It doesn’t matter if you’re guilty or innocent, if you have an alibi or not—it isn’t possible for anything you say to help you, and it’s very possible that innocuous things you say will hurt you.

Definitely worth half an hour of your time.

And this is a video of Virginia Beach Police Department Officer George Bruch, who basically says that Duane is right.

Posted on July 31, 2008 at 12:52 PM • 120 Comments

TSA Proud of Confiscating Non-Dangerous Item

This is just sad. The TSA confiscated a battery pack not because it’s dangerous, but because other passengers might think it’s dangerous. And they’re proud of the fact.

“We must treat every suspicious item the same and utilize the tools we have available to make a final determination,” said Federal Security Director David Wynn. “Procedures are in place for a reason and this is a clear indication our workforce is doing a great job.”

My guess is that if Kip Hawley were allowed to comment on my blog, he would say something like this: “It’s not just bombs that are prohibited; it’s things that look like bombs. This looks enough like a bomb to fool the other passengers, and that in itself is a threat.”

Okay, that’s fair. But the average person doesn’t know what a bomb looks like; all he knows is what he sees on television and in the movies. And this rule means that all homemade electronics are confiscated, because anything homemade with wires can look like a bomb to someone who doesn’t know better. The rule just doesn’t work.

And in today’s passengers-fight-back world, do you think anyone is going to successfully do anything with a fake bomb?

Posted on July 30, 2008 at 6:11 AM • 146 Comments

World War II Deception Story

Great security story from an obituary of former OSS agent Roger Hall:

One of his favorite OSS stories involved a colleague sent to occupied France to destroy a seemingly impenetrable German tank at a key crossroads. The French resistance found that grenades were no use.

The OSS man, fluent in German and dressed like a French peasant, walked up to the tank and yelled, “Mail!”

The lid opened, and in went two grenades.

Hall’s book about his OSS days, You’re Stepping on My Cloak and Dagger, is a must-read.

Posted on July 29, 2008 at 1:50 PM • 28 Comments

The DNS Vulnerability

Despite the best efforts of the security community, the details of a critical internet vulnerability discovered by Dan Kaminsky about six months ago have leaked. Hackers are racing to produce exploit code, and network operators who haven’t already patched the hole are scrambling to catch up. The whole mess is a good illustration of the problems with researching and disclosing flaws like this.

The details of the vulnerability aren’t important, but basically it’s a form of DNS cache poisoning. The DNS system is what translates domain names people understand, like www.schneier.com, to IP addresses computers understand: 204.11.246.1. There is a whole family of vulnerabilities where the DNS system on your computer is fooled into thinking that the IP address for www.badsite.com is really the IP address for www.goodsite.com—there’s no way for you to tell the difference—and that allows the criminals at www.badsite.com to trick you into doing all sorts of things, like giving up your bank account details. Kaminsky discovered a particularly nasty variant of this cache-poisoning attack.
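
To make that concrete, here is a minimal Python sketch of an ordinary lookup. Whatever address it prints is simply whatever your resolver tells you, and that trust is exactly what cache poisoning abuses:

    import socket

    # Ask the operating system's resolver to translate a name into an address.
    # Classic resolvers accepted the first UDP reply matching their query's
    # 16-bit transaction ID -- the check a cache-poisoning attacker tries to
    # beat by flooding forged replies.
    print(socket.gethostbyname("www.schneier.com"))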

Here’s the way the timeline was supposed to work: Kaminsky discovered the vulnerability about six months ago, and quietly worked with vendors to patch it. (There’s a fairly straightforward fix, although the implementation nuances are complicated.) Of course, this meant describing the vulnerability to them; why would companies like Microsoft and Cisco believe him otherwise? On July 8, he held a press conference to announce the vulnerability—but not the details—and reveal that a patch was available from a long list of vendors. We would all have a month to patch, and Kaminsky would release details of the vulnerability at the BlackHat conference early next month.

Of course, the details leaked. How isn’t important; it could have leaked a zillion different ways. Too many people knew about it for it to remain secret. Others who knew the general idea were too smart not to speculate on the details. I’m kind of amazed the details remained secret for this long; undoubtedly it had leaked into the underground community before the public leak two days ago. So now everyone who back-burnered the problem is rushing to patch, while the hacker community is racing to produce working exploits.

What’s the moral here? It’s easy to condemn Kaminsky: If he had shut up about the problem, we wouldn’t be in this mess. But that’s just wrong. Kaminsky found the vulnerability by accident. There’s no reason to believe he was the first one to find it, and it’s ridiculous to believe he would be the last. Don’t shoot the messenger. The problem is with the DNS protocol; it’s insecure.

The real lesson is that the patch treadmill doesn’t work, and it hasn’t for years. This cycle of finding security holes and rushing to patch them before the bad guys exploit those vulnerabilities is expensive, inefficient and incomplete. We need to design security into our systems right from the beginning. We need assurance. We need security engineers involved in system design. This process won’t prevent every vulnerability, but it’s much more secure—and cheaper—than the patch treadmill we’re all on now.

What a security engineer brings to the problem is a particular mindset. He thinks about systems from a security perspective. It’s not that he discovers all possible attacks before the bad guys do; it’s more that he anticipates potential types of attacks, and defends against them even if he doesn’t know their details. I see this all the time in good cryptographic designs. It’s over-engineering based on intuition, but if the security engineer has good intuition, it generally works.

Kaminsky’s vulnerability is a perfect example of this. Years ago, cryptographer Daniel J. Bernstein looked at DNS security and decided that Source Port Randomization was a smart design choice. That’s exactly the work-around being rolled out now following Kaminsky’s discovery. Bernstein didn’t discover Kaminsky’s attack; instead, he saw a general class of attacks and realized that this enhancement could protect against them. Consequently, the DNS program he wrote in 2000, djbdns, doesn’t need to be patched; it’s already immune to Kaminsky’s attack.
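
Some back-of-the-envelope arithmetic (mine, not Bernstein’s or Kaminsky’s) shows why the work-around helps: a blind spoofer who once only had to guess a 16-bit transaction ID now also has to guess roughly 16 bits of source port.

    # Rough arithmetic on a blind DNS spoofer's search space.
    TXID_SPACE = 2 ** 16          # 16-bit DNS transaction ID
    PORT_SPACE = 2 ** 16 - 1024   # usable ephemeral source ports (approximate)

    fixed_port = TXID_SPACE                 # ID unknown, source port predictable
    random_port = TXID_SPACE * PORT_SPACE   # ID and source port both unknown

    print(f"fixed source port:      {fixed_port:,} possibilities")
    print(f"randomized source port: {random_port:,} possibilities")
    print(f"roughly {random_port // fixed_port:,}x more forged packets needed")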

That’s what a good design looks like. It’s not just secure against known attacks; it’s also secure against unknown attacks. We need more of this, not just on the internet but in voting machines, ID cards, transportation payment cards … everywhere. Stop assuming that systems are secure unless demonstrated insecure; start assuming that systems are insecure unless designed securely.

This essay previously appeared on Wired.com.

EDITED TO ADD (8/7): Seems like the flaw is much worse than we thought.

EDITED TO ADD (8/13): Someone else discovered the vulnerability first.

Posted on July 29, 2008 at 6:01 AM • 71 Comments

Software Liabilities and Free Software

Whenever I write about software liabilities, many people ask about free and open source software. If people who write free software, like Password Safe, were forced to assume liabilities, they simply couldn’t afford to, and free software would disappear.

Don’t worry, they won’t be.

The key to understanding this is that this sort of contractual liability is part of a contract, and with free software—or free anything—there’s no contract. Free software wouldn’t fall under a liability regime because the writer and the user have no business relationship; they are not seller and buyer. I would hope the courts would realize this without any prompting, but we could always pass a Good Samaritan-like law that would protect people who distribute free software. (The opposite would be an Attractive Nuisance-like law—that would be bad.)

There would be an industry of companies that provide liability protection for free software. If Red Hat, for example, sold Linux commercially, it would have to provide some liability protection. Yes, this would mean charging more for Linux; that extra would go to insurance premiums. The same sort of insurance protection would be available to companies that use other free software packages.

The insurance industry is key to making this work. Luckily, insurers are good at protecting people against liability, and there’s no reason to think they can’t do it here.

I’ve written more about liabilities and the insurance industry here.

Posted on July 28, 2008 at 2:42 PM • 51 Comments

Washington Post Comments on Terrorist Plots

From this article, published last April:

Batiste confided, somewhat fantastically, that he wanted to blow up the Sears Tower in Chicago, which would then fall into a nearby prison, freeing Muslim prisoners who would become the core of his Moorish army. With them, he would establish his own country.

Somewhat fantastically? What would the Washington Post consider to be truly fantastic? A plan involving Godzilla? Clearly they have some very high standards.

I’m sick of people taking these idiots seriously. This plot is beyond fantastic; it’s delusional.

Posted on July 25, 2008 at 6:48 AM • 50 Comments

Anti-Terrorism Stupidity at Yankee Stadium

They’re confiscating sunscreen at Yankee Stadium:

The team contends that sunscreen has long been on the list of stadium contraband, but there is no mention of it on the Yankee Web site.

Four weeks ago, Stadium officials decided that sunscreen of all sizes and varieties would not be permitted, a security supervisor told The Post before last night’s game.

“There have been a lot of complaints,” he said. “We tell them to apply once and then throw it out.”

For fans who bring babies or young children to cheer on the home team, the guard had suggested they “beg” to take the sunblock in.

Seeing the giant bag full of confiscated sunscreen Saturday, one steaming Yankee fan asked whether he could take one of the tubes and apply it before heading into the park.

“Absolutely not,” the guard told him. “What if you get a rash? You might sue the Yankees.”

Next, I suppose, is confiscating liquids at pools.

We’ve collectively lost our minds.

This story has a happy ending, though. A day after The New York Post published this story, Yankee Stadium reversed its ban. Now, if only the Post had that same effect on airport security.

Posted on July 24, 2008 at 6:50 AM • 46 Comments

Information Security and Liabilities

In my fourth column for the Guardian last Thursday, I talk about information security and liabilities:

Last summer, the House of Lords Science and Technology Committee issued a report on “Personal Internet Security.” I was invited to give testimony for that report, and one of my recommendations was that software vendors be held liable when they are at fault. Their final report included that recommendation. The government rejected the recommendations in that report last autumn, and last week the committee issued a report on their follow-up inquiry, which still recommends software liabilities.

Good for them.

I’m not implying that liabilities are easy, or that all the liability for security vulnerabilities should fall on the vendor. But the courts are good at partial liability. Any automobile liability suit has many potential responsible parties: the car, the driver, the road, the weather, possibly another driver and another car, and so on. Similarly, a computer failure has several parties who may be partially responsible: the software vendor, the computer vendor, the network vendor, the user, possibly another hacker, and so on. But we’re never going to get there until we start. Software liability is the market force that will incentivise companies to improve their software quality—and everyone’s security.

Posted on July 23, 2008 at 3:09 PM • 66 Comments

Speed Cameras Record Every Car

In this article about British speed cameras, and a trick to avoid them that does not work, is this sentence:

As vehicles pass between the entry and exit camera points their number plates are digitally recorded, whether speeding or not.

Without knowing more, I can guarantee that those records are kept forever.

EDITED TO ADD (7/25): As pointed out by Pete Darby in comments: Passenger moons speeding camera and gets his picture published even though the car was not speeding.

Police may take action against the man for public order offences and not wearing a seat belt.

Officers have the registration of the car, which was not breaking the speed limit, and intend to contact its owner.

It is understood the driver will not face prosecution as no driving offence was being committed.

How did they even know to look at the picture in the first place?

Posted on July 23, 2008 at 5:32 AM • 76 Comments

Washington DC Metro Farecard Hack

Clever:

Thieves took a legitimate paper Farecard with $40 in value, sliced the card’s magnetic strip into four lengthwise pieces, and then reattached one piece each to four separate defunct paper Farecards. The thieves then took the doctored Farecards to a Farecard machine and added fare, typically a nickel. By doing so, the doctored Farecard would go into the machine and a legitimate Farecard with the new value, $40.05, would come out.

My guess is that the thieves were caught not through some fancy technology, but because they had to monetize their attack. They sold Farecards on the street at half face value.

Posted on July 22, 2008 at 12:29 PM • 36 Comments

The Case of the Stolen BlackBerry and the Awesome Chinese Hacking Skills

A high-level British government employee had his BlackBerry stolen by Chinese intelligence:

The aide, a senior Downing Street adviser who was with the prime minister on a trip to China earlier this year, had his BlackBerry phone stolen after being picked up by a Chinese woman who had approached him in a Shanghai hotel disco.

The aide agreed to return to his hotel with the woman. He reported the BlackBerry missing the next morning.

That can’t look good on your annual employee review.

But it’s this part of the article that has me confused:

Experts say that even if the aide’s device did not contain anything top secret, it might enable a hostile intelligence service to hack into the Downing Street server, potentially gaining access to No 10’s e-mail traffic and text messages.

Um, what? I assume the IT department just turned off the guy’s password. Was this nonsense peddled to the press by the UK government, or is some “expert” trying to sell us something? The article doesn’t say.

EDITED TO ADD (7/22): The first commenter makes a good point, which I didn’t think of. The article says that it’s Chinese intelligence:

A senior official said yesterday that the incident had all the hallmarks of a suspected honeytrap by Chinese intelligence.

But Chinese intelligence would be far more likely to clone the BlackBerry and then return it. Much better information that way. This is much more likely to be petty theft.

EDITED TO ADD (7/23): The more I think about this story, the less sense it makes. If you’re a Chinese intelligence officer and you manage to get an aide to the British Prime Minister to have sex with one of your agents, you’re not going to immediately burn him by stealing his BlackBerry. That’s just stupid.

Posted on July 22, 2008 at 10:05 AM • 40 Comments

Scary Knife Makes for Great Newspaper Headlines

Who cannot feel a little chill of fear after reading this: “Britain on alert for deadly new knife with exploding tip that freezes victims’ organs.”

Yes, it’s real. The knife is designed for people who need to drop large animals quickly: sharks, bears, etc.

I have no idea why Britain is on alert for it, though.

EDITED TO ADD (7/24): Knife crime is rising in the UK.

Posted on July 21, 2008 at 6:12 AM • 63 Comments

Cost/Benefit Analysis of Airline Security

This report, “Assessing the risks, costs and benefits of United States aviation security measures” by Mark Stewart and John Mueller, is excellent reading:

The United States Office of Management and Budget has recommended the use of cost-benefit assessment for all proposed federal regulations. Since 9/11 government agencies in Australia, United States, Canada, Europe and elsewhere have devoted much effort and expenditure to attempt to ensure that a 9/11 type attack involving hijacked aircraft is not repeated. This effort has come at considerable cost, running in excess of US$6 billion per year for the United States Transportation Security Administration (TSA) alone. In particular, significant expenditure has been dedicated to two aviation security measures aimed at preventing terrorists from hijacking and crashing an aircraft into buildings and other infrastructure: (i) Hardened cockpit doors and (ii) Federal Air Marshal Service. These two security measures cost the United States government and the airlines nearly $1 billion per year. This paper seeks to discover whether aviation security measures are cost-effective by considering their effectiveness, their cost and expected lives saved as a result of such expenditure. An assessment of the Federal Air Marshal Service suggests that the annual cost is $180 million per life saved. This is greatly in excess of the regulatory safety goal of $1-$10 million per life saved. As such, the air marshal program would seem to fail a cost-benefit analysis. In addition, the opportunity cost of these expenditures is considerable, and it is highly likely that far more lives would have been saved if the money had been invested instead in a wide range of more cost-effective risk mitigation programs. On the other hand, hardening of cockpit doors has an annual cost of only $800,000 per life saved, showing that this is a cost-effective security measure.

From the body:

Hardening cockpit doors has the highest risk reduction (16.67%) at lowest additional cost of $40 million. On the other hand, the Federal Air Marshal Service costs $900 million pa but reduces risk by only 1.67%. The Federal Air Marshal Service may be more cost-effective if it is able to show extra benefit over the cheaper measure of hardening cockpit doors. However, the Federal Air Marshal Service seems to have significantly less benefit which means that hardening cockpit doors is the more cost-effective measure.
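
The ratios are simple division. As a sanity check, here is the arithmetic implied by the quoted figures; the lives-per-year numbers below are just the quotients, not estimates taken from the report itself:

    # cost per life saved = annual cost / annual lives saved, so
    # annual lives saved = annual cost / cost per life saved.
    measures = {
        "Federal Air Marshal Service": (900.0, 180.0),  # ($M/year, $M per life)
        "Hardened cockpit doors":      (40.0,  0.8),
    }
    for name, (annual_cost, cost_per_life) in measures.items():
        lives = annual_cost / cost_per_life
        print(f"{name}: ~{lives:.0f} lives/year at ${cost_per_life}M per life")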

Cost-benefit analysis is definitely the way to look at these security measures. It’s hard for people to do, because it requires putting a dollar value on a human life—something we can’t possibly do with our own. But as a society, it is something we do again and again: when we raise or lower speed limits, when we ban a certain pesticide, when we enact building codes. Insurance companies do it all the time. We do it implicitly, because we can’t talk about it explicitly. I think there is considerable value in talking about it.

(Note the table on page 5 of the report, which lists the cost per lives saved for a variety of safety and security measures.)

The final paper will eventually be published in the Journal of Transportation Security. I never even knew there was such a thing.

EDITED TO ADD (8/13): New York Times op-ed on the subject.

Posted on July 21, 2008 at 5:53 AM • 25 Comments

Midazolam as a Non-Lethal Weapon

Did you know that, in some jurisdictions, police can inject midazolam (better known as Versed) into suspects to subdue them?

“There is no research guideline. There is no validated protocol for this. There’s not even a clear set of indications for when this is to be used except when people are agitated. By saying that it’s done by the emergency medical personnel, they basically are trying to have it both ways. That is, they’re trying to use a medical protocol that is not validated, not for a police function, arrest and detention,” Miles said.

“The decision to administer Versed is based purely on a paramedic decision, not a police decision,” Slovis said.

It’s up to the officer to call an ambulance and determine if a person is in a condition called excited delirium.

“I don’t know if I would use the word diagnosing, but they are assessing the situation and saying, ‘This person is not acting rationally. This is something I’ve been trained to recognize, this seems like excited delirium.’ I don’t view delirium in the field as a police function. It is a medical emergency. We’re giving the drug Versed that’s routinely used in thousands of health care settings across the country in the field by trained paramedics. I view what we’re doing as the best possible medical practice to a medical emergency,” Slovis said.

The biggest side effect is amnesia, which makes it harder for any defendant to defend himself in court.

Posted on July 18, 2008 at 11:28 AM • 73 Comments

TrueCrypt's Deniable File System

Together with Tadayoshi Kohno, Steve Gribble, and three of their students at the University of Washington, I have a new paper that breaks the deniable encryption feature of TrueCrypt version 5.1a. Basically, modern operating systems leak information like mad, making deniability a very difficult requirement to satisfy.

ABSTRACT: We examine the security requirements for creating a Deniable File System (DFS), and the efficacy with which the TrueCrypt disk-encryption software meets those requirements. We find that the Windows Vista operating system itself, Microsoft Word, and Google Desktop all compromise the deniability of a TrueCrypt DFS. While staged in the context of TrueCrypt, our research highlights several fundamental challenges to the creation and use of any DFS: even when the file system may be deniable in the pure, mathematical sense, we find that the environment surrounding that file system can undermine its deniability, as well as its contents. Finally, we suggest approaches for overcoming these challenges on modern operating systems like Windows.

The students did most of the actual work. I helped with the basic ideas, and contributed the threat model. Deniability is a very hard feature to achieve.

There are several threat models against which a DFS could potentially be secure:

  • One-Time Access. The attacker has a single snapshot of the disk image. An example would be when the secret police seize Alice’s computer.
  • Intermittent Access. The attacker has several snapshots of the disk image, taken at different times. An example would be border guards who make a copy of Alice’s hard drive every time she enters or leaves the country.
  • Regular Access. The attacker has many snapshots of the disk image, taken in short intervals. An example would be if the secret police break into Alice’s apartment every day when she is away, and make a copy of the disk each time.
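
Even the intermittent-access model is surprisingly powerful. Here is a sketch, mine rather than anything from the paper, of the obvious thing an attacker with two snapshots does: diff them, and ask why supposedly unused free space keeps changing.

    def changed_blocks(snap_a: bytes, snap_b: bytes, block_size: int = 4096):
        """Return the indices of disk blocks that differ between two snapshots."""
        n = min(len(snap_a), len(snap_b)) // block_size
        return [i for i in range(n)
                if snap_a[i * block_size:(i + 1) * block_size]
                != snap_b[i * block_size:(i + 1) * block_size]]

    # If blocks inside a region Alice claims is unused free space change
    # between border crossings, that "random" data is plausibly a hidden
    # volume being mounted and written to -- and deniability is gone.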

Since we wrote our paper, TrueCrypt released version 6.0 of its software, which claims to have addressed many of the issues we’ve uncovered. In the paper, we said:

We analyzed the most current version of TrueCrypt available at the writing of the paper, version 5.1a. We shared a draft of our paper with the TrueCrypt development team in May 2008. TrueCrypt version 6.0 was released in July 2008. We have not analyzed version 6.0, but observe that TrueCrypt v6.0 does take new steps to improve TrueCrypt’s deniability properties (e.g., via the creation of deniable operating systems, which we also recommend in Section 5). We suggest that the breadth of our results for TrueCrypt v5.1a highlight the challenges to creating deniable file systems. Given these potential challenges, we encourage the users not to blindly trust the deniability of such systems. Rather, we encourage further research evaluating the deniability of such systems, as well as research on new yet light-weight methods for improving deniability.

So we don’t know whether the deniability feature in TrueCrypt 6.0 can be broken. But, honestly, I wouldn’t trust it.

There have been two news articles (and a Slashdot thread) about the paper.

One talks about a generalization to encrypted partitions. If you don’t encrypt the entire drive, there is the possibility—and it seems very probable—that information about the encrypted partition will leak onto the unencrypted rest of the drive. Whole disk encryption is the smartest option.

Our paper will be presented at the 3rd USENIX Workshop on Hot Topics in Security (HotSec ’08). I’ve written about deniability before.

Posted on July 18, 2008 at 6:56 AM • 73 Comments

Locksmiths Hate Computer Geeks who Learn Lockpicking

They do:

Hobby groups throughout North America have cracked supposedly unbeatable locks. Mr. Nekrep, who maintains a personal collection of more than 300 locks, has demonstrated online how to open a Kensington laptop lock using Scotch tape and a Post-it note. Another Lockpicking101.com member discovered the well-publicized method of opening Kryptonite bike locks with a ball-point pen, a revelation that prompted Kryptonite to replace all of its compromised locks.

Other lock manufacturers haven’t admitted their flaws so readily. Marc Tobias, a lawyer and security expert, recently shook up the lock-picking community by publishing a detailed analysis of how to crack the uncrackable: Medeco locks.

“We’ve figured out how to break them in as little as 30 seconds,” he said. “[Medeco] won’t admit it, though. They still believe in security through obscurity. But by not fixing the problems we identify, lock-makers are putting the public at risk. They have a duty to disclose vulnerabilities. If they don’t, we will.”

Posted on July 17, 2008 at 1:30 PM

Homeland Security Cost-Benefit Analysis

This is an excellent paper by Ohio State political science professor John Mueller. Titled “The Quixotic Quest for Invulnerability: Assessing the Costs, Benefits, and Probabilities of Protecting the Homeland,” it lays out some common-sense premises and policy implications.

The premises:

1. The number of potential terrorist targets is essentially infinite.

2. The probability that any individual target will be attacked is essentially zero.

3. If one potential target happens to enjoy a degree of protection, the agile terrorist usually can readily move on to another one.

4. Most targets are “vulnerable” in that it is not very difficult to damage them, but invulnerable in that they can be rebuilt in fairly short order and at tolerable expense.

5. It is essentially impossible to make a very wide variety of potential terrorist targets invulnerable except by completely closing them down.

The policy implications:

1. Any protective policy should be compared to a “null case”: do nothing, and use the money saved to rebuild and to compensate any victims.

2. Abandon any effort to imagine a terrorist target list.

3. Consider negative effects of protection measures: not only direct cost, but inconvenience, enhancement of fear, negative economic impacts, reduction of liberties.

4. Consider the opportunity costs, the tradeoffs, of protection measures.

Here’s the abstract:

This paper attempts to set out some general parameters for coming to grips with a central homeland security concern: the effort to make potential targets invulnerable, or at least notably less vulnerable, to terrorist attack. It argues that protection makes sense only when protection is feasible for an entire class of potential targets and when the destruction of something in that target set would have quite large physical, economic, psychological, and/or political consequences. There are a very large number of potential targets where protection is essentially a waste of resources and a much more limited one where it may be effective.

The whole paper is worth reading.

Posted on July 17, 2008 at 6:43 AM • 61 Comments

Disgruntled Employee Holds San Francisco Computer Network Hostage

Trusted insiders can do a lot of damage:

Childs created a password that granted him exclusive access to the system, authorities said. He initially gave pass codes to police, but they didn’t work. When pressed, Childs refused to divulge the real code even when threatened with arrest, they said.

He was taken into custody Sunday. City officials said late Monday that they had made some headway into cracking his pass codes and regaining access to the system.

Childs has worked for the city for about five years. One official with knowledge of the case said he had been disciplined on the job in recent months for poor performance and that his supervisors had tried to fire him.

“They weren’t able to do it – this was kind of his insurance policy,” said the official, speaking on condition of anonymity because the attempted firing was a personnel matter.

Authorities say Childs began tampering with the computer system June 20. The damage is still being assessed, but authorities say undoing his denial of access to other system administrators could cost millions of dollars.

EDITED TO ADD (8/10): According to another article, “officials say the network so far has been humming along just fine without admin access by the city.” So it’s not a complete shutdown so much as an admin lockout.

EDITED TO ADD (8/13): This is getting weirder. Terry Childs gave the right passwords, but only to the mayor personally.

Posted on July 16, 2008 at 11:43 AM • 59 Comments

Congratulations to our Millionth Terrorist!

The U.S. terrorist watch list has hit one million names. I sure hope we’re giving our millionth terrorist a prize of some sort.

Who knew that a million people are terrorists? Why, there are only twice as many burglars in the U.S., and fifteen times more terrorists than arsonists.

Is this idiotic, or what?

Some people are saying fix it, but there seems to be no motivation to do so. I’m sure the career incentives aren’t aligned that way. You probably get promoted by putting people on the list. But taking someone off the list…if you’re wrong, no matter how remote that possibility is, you can probably lose your career. This is why in civilized societies we have a judicial system, to be an impartial arbiter between law enforcement and the accused. But that system doesn’t apply here.

Kafka would be proud.

EDITED TO ADD (7/16): More information:

There are only 400,000 on it, and 95 percent are not U.S. “persons.” (Persons = citizens plus others with a legal right to be in the U.S.)

The “million” number refers to records. The difference is a result of listing several different aliases or spellings for a suspected terrorist.

“That is not the same as 1 million names or 1 million individuals,” Mr. Kolton said. “It’s a little bit frustrating because I feel like they are getting away with muddying up the terms.”

Not that 400,000 terrorists is any less absurd.

Screening and law enforcement agencies encountered the actual people on the watch list (not false matches) more than 53,000 times from December 2003 to May 2007, according to a Government Accountability Office report last fall.

Okay, so I have a question. How many of those 53,000 were arrested? Of those who were not, why not? How many have we taken off the list after we’ve investigated them?

EDITED TO ADD (7/17): Bob Blakely runs the numbers.

EDITED TO ADD (8/13): The Daily Show’s Jon Stewart on the subject.

Posted on July 16, 2008 at 6:08 AM • 74 Comments

Using a File Erasure Tool Considered Suspicious

By a California court:

The designer, Carter Bryant, has been accused by Mattel of using Evidence Eliminator on his laptop computer just two days before investigators were due to copy its hard drive.

Carter hasn’t denied that the program was run on his computer, but he said it wasn’t to destroy evidence. He said he had legitimate reasons to use the software.

[…]

But the wiper programs don’t ensure a clean getaway. They leave behind a kind of digital calling card.

“Not only do these programs leave a trace that they were used, they each have a distinctive fingerprint,” Kessler said. “Evidence Eliminator leaves one that’s different from Window Washer, and so on.”

It’s the kind of information that can be brought up in court. And if the digital calling card was left by Evidence Eliminator, it could raise some eyebrows, even if the wiper was used for the most innocent of reasons.

I have often recommended that people use file erasure tools regularly, especially when crossing international borders with their computers. Now we have one more reason to use them regularly: plausible deniability if you’re accused of erasing data to keep it from the police.

Posted on July 15, 2008 at 1:36 PM • 67 Comments

Man-in-the-Middle Attacks

Last week’s dramatic rescue of 15 hostages held by the guerrilla organization FARC was the result of months of intricate deception on the part of the Colombian government. At the center was a classic man-in-the-middle attack.

In a man-in-the-middle attack, the attacker inserts himself between two communicating parties. Both believe they’re talking to each other, and the attacker can delete or modify the communications at will.

The Wall Street Journal reported how this gambit played out in Colombia:

“The plan had a chance of working because, for months, in an operation one army officer likened to a ‘broken telephone,’ military intelligence had been able to convince Ms. Betancourt’s captor, Gerardo Aguilar, a guerrilla known as ‘Cesar,’ that he was communicating with his top bosses in the guerrillas’ seven-man secretariat. Army intelligence convinced top guerrilla leaders that they were talking to Cesar. In reality, both were talking to army intelligence.”

This ploy worked because Cesar and his guerrilla bosses didn’t know one another well. They didn’t recognize one another’s voices, and didn’t have a friendship or shared history that could have tipped them off about the ruse. Man-in-the-middle is defeated by context, and the FARC guerrillas didn’t have any.

And that’s why man-in-the-middle, abbreviated MITM in the computer-security community, is such a problem online: Internet communication is often stripped of any context. There’s no way to recognize someone’s face. There’s no way to recognize someone’s voice. When you receive an e-mail purporting to come from a person or organization, you have no idea who actually sent it. When you visit a website, you have no idea if you’re really visiting that website. We all like to pretend that we know who we’re communicating with—and for the most part, of course, there isn’t any attacker inserting himself into our communications—but in reality, we don’t. And there are lots of hacker tools that exploit this unjustified trust, and implement MITM attacks.

Even with context, it’s still possible for MITM to fool both sides—because electronic communications are often intermittent. Imagine that one of the FARC guerrillas became suspicious about who he was talking to. So he asks a question about their shared history as a test: “What did we have for dinner that time last year?” or something like that. On the telephone, the attacker wouldn’t be able to answer quickly, so his ruse would be discovered. But e-mail conversation isn’t synchronous. The attacker could simply pass that question through to the other end of the communications, and when he got the answer back, he would be able to reply.

This is the way MITM attacks work against web-based financial systems. A bank demands authentication from the user: a password, a one-time code from a token or whatever. The attacker sitting in the middle receives the request from the bank and passes it to the user. The user responds to the attacker, who passes that response to the bank. Now the bank assumes it is talking to the legitimate user, and the attacker is free to send transactions directly to the bank. This kind of attack completely bypasses any two-factor authentication mechanisms, and is becoming a more popular identity-theft tactic.

There are cryptographic solutions to MITM attacks, and there are secure web protocols that implement them. Many of them require shared secrets, though, making them useful only in situations where people already know and trust one another.

The NSA-designed STU-III and STE secure telephones solve the MITM problem by embedding the identity of each phone together with its key. (The NSA creates all keys and is trusted by everyone, so this works.) When two phones talk to each other securely, they exchange keys and display the other phone’s identity on a screen. Because the phone is in a secure location, the user now knows who he is talking to, and if the phone displays another organization—as it would if there were a MITM attack in progress—he should hang up.

Zfone, a secure VoIP system, protects against MITM attacks with a short authentication string. After two Zfone terminals exchange keys, both computers display a four-character string. The users are supposed to manually verify that both strings are the same—”my screen says 5C19; what does yours say?”—to ensure that the phones are communicating directly with each other and not with an MITM. The AT&T TSD-3600 worked similarly.
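
Here is a sketch of the idea; this is an illustration, not ZRTP’s actual computation. Both ends hash the key material they negotiated and compare a few characters of the digest out loud. A man in the middle necessarily holds two different keys, so the two endpoints would read out two different strings.

    import hashlib

    def short_auth_string(key_material: bytes, chars: int = 4) -> str:
        """Derive a short, human-comparable string from negotiated key material."""
        return hashlib.sha256(key_material).hexdigest()[:chars].upper()

    # Both endpoints compute this from their own view of the session key.
    # Matching strings mean one shared key; differing strings mean a MITM.
    print(short_auth_string(b"session key material"))  # four hex characters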

This sort of protection is embedded in SSL, although no one uses it. As it is normally used, SSL provides an encrypted communications link to whoever is at the other end: bank and phishing site alike. And the better phishing sites create valid SSL connections, so as to more effectively fool users. But if the user wanted to, he could manually check the SSL certificate to see if it was issued to “National Bank of Trustworthiness” or “Two Guys With a Computer in Nigeria.”
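
For the record, that manual check amounts to only a few lines of code. A sketch using Python’s standard ssl module, with the hostname as an example:

    import socket
    import ssl

    hostname = "www.schneier.com"  # example; substitute your bank
    context = ssl.create_default_context()

    # Fetch the server's certificate -- the same data behind the padlock icon.
    with socket.create_connection((hostname, 443)) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            cert = tls.getpeercert()

    # "subject" names who the certificate was issued to: the field you would
    # have to read to tell National Bank of Trustworthiness from Two Guys
    # With a Computer in Nigeria.
    print(dict(item[0] for item in cert["subject"]))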

No one does, though, because you have to both remember and be willing to do the work. (The browsers could make this easier if they wanted to, but they don’t seem to want to.) In the real world, you can easily tell a branch of your bank from a money changer on a street corner. But on the internet, a phishing site can be easily made to look like your bank’s legitimate website. Any method of telling the two apart takes work. And that’s the first step to fooling you with a MITM attack.

Man-in-the-middle isn’t new, and it doesn’t have to be technological. But the internet makes the attacks easier and more powerful, and that’s not going to change anytime soon.

This essay originally appeared on Wired.com.

Posted on July 15, 2008 at 6:47 AM • 45 Comments

Daniel Solove on the New FISA Law

From his blog:

Future presidents can learn a lot from all this—do exactly what the Bush Administration did! If the law holds you back, don’t first go to Congress and try to work something out. Secretly violate that law, and then when you get caught, staunchly demand that Congress change the law to your liking and then immunize any company that might have illegally cooperated with you. That’s the lesson. You spit in Congress’s face, and they’ll give you what you want.

The past eight years have witnessed a dramatic expansion of Executive Branch power, with a rather anemic push-back from the Legislative and Judicial Branches. We have extensive surveillance on a mass scale by agencies with hardly any public scrutiny, operating mostly in secret, with very limited judicial oversight, and also with very minimal legislative oversight. Most citizens know little about what is going on, and it will be difficult for them to find out, since everything is kept so secret. Secrecy and accountability rarely go well together. The telecomm lawsuits were at least one way that citizens could demand some information and accountability, but now that avenue appears to be shut down significantly with the retroactive immunity grant. There appear to be fewer ways for the individual citizen or citizen advocacy groups to ensure accountability of the government in the context of national security.

That’s the direction we’re heading in—more surveillance, more systemic government monitoring and data mining, and minimal oversight and accountability—with most of the oversight being very general, not particularly rigorous, and nearly always secret—and with the public being almost completely shut out of the process. But don’t worry, you shouldn’t get too upset about all this. You probably won’t know much about it. They’ll keep the dirty details from you, because what you don’t know can’t hurt you.

Posted on July 14, 2008 at 12:08 PM • 30 Comments

Chinese Cyber Attacks

The popular media conception is that there is a coordinated attempt by the Chinese government to hack into U.S. computers—military, government, corporate—and steal secrets. The truth is a lot more complicated.

There certainly is a lot of hacking coming out of China. Any company that does security monitoring sees it all the time.

These hacker groups seem not to be working for the Chinese government. They don’t seem to be coordinated by the Chinese military. They’re basically young, male, patriotic Chinese citizens, trying to demonstrate that they’re just as good as everyone else. As well as the American networks the media likes to talk about, their targets also include pro-Tibet, pro-Taiwan, Falun Gong and pro-Uyghur sites.

The hackers are in this for two reasons: fame and glory, and an attempt to make a living. The fame and glory comes from their nationalistic goals. Some of these hackers are heroes in China. They’re upholding the country’s honor against both anti-Chinese forces like the pro-Tibet movement and larger forces like the United States.

And the money comes from several sources. The groups sell owned computers, malware services, and stolen data on the black market. They sell hacker tools and videos to others wanting to play. They even sell T-shirts, hats and other merchandise on their Web sites.

This is not to say that the Chinese military ignores the hacker groups within their country. Certainly the Chinese government knows the leaders of the hacker movement and chooses to look the other way. They probably buy stolen intelligence from these hackers. They probably recruit for their own organizations from this self-selecting pool of experienced hacking experts. They certainly learn from the hackers.

And some of the hackers are good. Over the years, they have become more sophisticated in both tools and techniques. They’re stealthy. They do good network reconnaissance. My guess is that what the Pentagon thinks is the problem is only a small percentage of the actual problem.

And they discover their own vulnerabilities. Earlier this year, one security company noticed a unique attack against a pro-Tibet organization. That same attack was also used two weeks earlier against a large multinational defense contractor.

They also hoard vulnerabilities. During the 1999 conflict over the two-states theory, in a heated exchange with a group of Taiwanese hackers, one Chinese group threatened to unleash multiple stockpiled worms at once. There was no reason to disbelieve this threat.

If anything, the fact that these groups aren’t being run by the Chinese government makes the problem worse. Without central political coordination, they’re likely to take more risks, do more stupid things and generally ignore the political fallout of their actions.

In this regard, they’re more like a non-state actor.

So while I’m perfectly happy that the U.S. government is using the threat of Chinese hacking as an impetus to get their own cybersecurity in order, and I hope they succeed, I also hope that the U.S. government recognizes that these groups are not acting under the direction of the Chinese military and doesn’t treat their actions as officially approved by the Chinese government.

This essay originally appeared on the Discovery Channel website.

EDITED TO ADD (7/18): A slightly longer version of this essay appeared in Information Security magazine as part of a point/counterpoint with Marcus Ranum. His half is here.

Posted on July 14, 2008 at 7:08 AM • 34 Comments

Good Essay on TSA Stupidity

From Salon:

“You ain’t takin’ this through,” she says. “No knives. You can’t bring a knife through here.”

It takes a moment for me to realize that she’s serious. “I’m … but … it’s …”

“Sorry.” She throws it into a bin and starts to walk away.

“Wait a minute,” I say. “That’s airline silverware.”

“Don’t matter what it is. You can’t bring knives through here.”

“Ma’am, that’s an airline knife. It’s the knife they give you on the plane.”

Posted on July 11, 2008 at 10:34 AM • 71 Comments

Exploiting the War on Photography

Petty thieves are exploiting the war on photography in Genoa:

As they were walking around, Jeff saw some interesting looking produce and pulled out his Canon G-9 Point-and-Shoot and took a few pictures. Within a few minutes a man came up dressed in plain clothes, flashed a badge, and told him he couldn’t take photos in the store. My brother said “no problem” (after all, it’s a private store, right?), but then the guy demanded my brother’s memory card.

My brother gave him that “Are you outta your mind” look and said, “No way!” Can you guess what happened next? The guy simply shrugged his shoulders and walked away.

My brother saw him in the store a little later, and the guy had a bag and was shopping. My brother made eye contact with him, and the guy turned away as though he didn’t want Jeff looking at him. Jeff feels like this wasn’t “official store security,” but instead some guy collecting (and then reselling) memory cards from unsuspecting tourists (many of whom might have just surrendered that card immediately).

Posted on July 10, 2008 at 6:54 AM • 47 Comments

The Continued Cheapening of the Word "Terrorism"

Now labor strikes are terrorism:

The Rail Tram and Bus Union (RTBU) said today it was planning a 24-hour strike by rail workers on July 17, the busiest day of the Catholic event.

It is the day Pope Benedict XVI will make his way through the streets of Sydney during the afternoon peak.

The NSW Government will take the matter to the Australian Industrial Relations Commission (AIRC) tomorrow.

Mr Iemma said his Government would not cave in to the RTBU.

“The Government will not be blackmailed into giving them what they want as a result of these industrial terror tactics,” he said.

That’s Morris Iemma, the Premier of New South Wales.

Terrorism is a heinous crime, and a serious international problem. It’s not a catchall word to describe anything you don’t like or don’t agree with, or even anything that adversely affects a large number of people. By using the word more broadly than its actual meaning, we muddy the already complicated popular conceptions of the issue. The word “terrorism” has a specific meaning, and we shouldn’t debase it.

Posted on July 8, 2008 at 6:10 AM • 74 Comments

Sunglasses that Hide your Face from Cameras

Clever. Article and video:

They work by mounting two small infrared lights on the front. The wearer is completely inconspicuous to the human eye, but cameras only see a big white blur where your face should be.

Building them is a snap: just take a pair of sunglasses, attach two small but powerful IR LEDs to two pairs of wires, one wire per LED. Then attach the LEDs to the glasses; the video suggests making a hole in the rim of the glasses to embed the LEDs. Glue or otherwise affix the wires to the temples of the glasses. At the end of the temples, attach lithium batteries. They should make contact with the black wire, but the red wires should be left suspended near the batteries without making contact. When you put them on, the red wire makes contact, turning the lights on. It’s functional, but we’re thinking that installing an on/off switch would be more elegant and it would allow you to wear them without depleting the batteries.

EDITED TO ADD (7/8): Doubts have been raised about whether this works as advertised against paparazzi cameras. I can’t tell for sure one way or the other.

Posted on July 7, 2008 at 1:54 PM • 47 Comments

Automatic Profiling Is Useless

No surprise:

Automated passenger profiling is rubbish, the Home Office has conceded in an amusing—and we presume inadvertent—blurt. “Attempts at automated profiling have been used in trial operations [at UK ports of entry] and has proved [sic] that the systems and technology available are of limited use,” says home secretary Jacqui Smith in her response to Lord Carlile’s latest terror legislation review.

The U.S. wants to do it anyway:

The Justice Department is considering letting the FBI investigate Americans without any evidence of wrongdoing, relying instead on a terrorist profile that could single out Muslims, Arabs or other racial or ethnic groups.

I’ve written about profiling before.

Posted on July 7, 2008 at 1:37 PM • 28 Comments

Encrypting Disks

The UK is learning:

The Scottish Ambulance Service confirmed today that a package containing contact information from its Paisley Emergency Medical Dispatch Centre (EMDC) has been lost by the courier, TNT, while in transit to one of its IT suppliers.

The portable data disk contained a copy of records of 894,629 calls to the ambulance service’s Paisley EMDC since February 2006. It was fully encrypted and password protected and includes the addresses of incidents, some phone numbers and some patient names. Given the security measures and the complex structure of the database it would be extremely difficult to gain access to any meaningful information.

News story here.

That’s what you want to do. If an encrypted disk is lost, there’s no problem; you can mail it directly to your worst enemy. Well, assuming you’ve implemented the encryption properly and chosen a good key.
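
The principle is easy to demonstrate. Here is a minimal sketch using the Python cryptography package; it illustrates encrypt-before-transit generally, not whatever the ambulance service actually deployed:

    from cryptography.fernet import Fernet

    key = Fernet.generate_key()   # stays home; it never travels with the disk
    box = Fernet(key)

    ciphertext = box.encrypt(b"894,629 dispatch records")

    # The ciphertext is what goes on the courier's disk: without the key it
    # is just noise, so losing the package leaks nothing.
    assert box.decrypt(ciphertext) == b"894,629 dispatch records"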

This is much better than what the HM Revenue & Customs office did in November.

I wrote about disk and laptop encryption previously.

Posted on July 4, 2008 at 1:10 PM • 33 Comments

Hundreds of Thousands of Laptops Lost at U.S. Airports Annually

This is a weird statistic:

Some of the largest and medium-sized U.S. airports report close to 637,000 laptops lost each year, according to the Ponemon Institute survey released Monday. Laptops are most commonly lost at security checkpoints, according to the survey.

Close to 10,278 laptops are reported lost every week at 36 of the largest U.S. airports, and 65 percent of those laptops are not reclaimed, the survey said. Around 2,000 laptops are recorded lost at the medium-sized airports, and 69 percent are not reclaimed.

Travelers seem to lack confidence that they will recover lost laptops. About 77 percent of people surveyed said they had no hope of recovering a lost laptop at the airport, with 16 percent saying they wouldn’t do anything if they lost their laptop during business travel. About 53 percent said that laptops contain confidential company information, with 65 percent taking no steps to protect the information.

I don’t know how to generalize that to a total number of lost laptops in the U.S.; let’s call it 750,000. At $1,000 per laptop—a very conservative estimate—that’s $750 million in lost laptops annually. Most are lost at security checkpoints, and I’m sure the numbers went up considerably since those checkpoints got more annoying after 9/11.

There aren’t a lot of real numbers about the costs of increased airport security. We pay in time, in anxiety, in inconvenience. But we also pay in goods. TSA employees steal out of suitcases. And opportunists steal hundreds of millions of dollars of laptops annually.

EDITED TO ADD (7/14): Seems like this is not a story.

Posted on July 4, 2008 at 8:20 AM • 50 Comments

Random Stupidity in the Name of Terrorism

An air traveler in Canada is first told by an airline employee that it is “illegal” to say certain words, and then that if she raised a fuss she would be falsely accused:

When we boarded a little later, I asked for the ninny’s name. He refused and hissed, “If you make a scene, I’ll call the pilot and you won’t be flying tonight.”

More on the British war on photographers.

A British man is forced to give up his hobby of photographing buses due to harassment.

The credit controller, from Gloucester, says he now suffers “appalling” abuse from the authorities and public who doubt his motives.

The bus-spotter, officially known as an omnibologist, said: “Since the 9/11 attacks there has been a crackdown.

“The past two years have absolutely been the worst. I have had the most appalling abuse from the public, drivers and police over-exercising their authority.

Mr McCaffery, who is married, added: “We just want to enjoy our hobby without harassment.

“I can deal with the fact someone might think I’m a terrorist, but when they start saying you’re a paedophile it really hurts.”

Is everything illegal and damaging now terrorism?

Israeli authorities are investigating why a Palestinian resident of Jerusalem rammed his bulldozer into several cars and buses Wednesday, killing three people before Israeli police shot him dead.

Israeli authorities are labeling it a terrorist attack, although they say there is no clear motive and the man—a construction worker—acted alone. It is not known if he had links to any terrorist organization.

New Jersey public school locked down after someone saw a ninja:

Turns out the ninja was actually a camp counselor dressed in black karate garb and carrying a plastic sword.

Police tell the Asbury Park Press the man was late to a costume-themed day at a nearby middle school.

And finally, not terrorism-related but a fine newspaper headline: “Giraffe helps camels, zebras escape from circus”:

Amsterdam police say 15 camels, two zebras and an undetermined number of llamas and potbellied swine briefly escaped from a traveling Dutch circus after a giraffe kicked a hole in their cage.

Are llamas really that hard to count?

EDITED TO ADD (7/2): Errors fixed.

Posted on July 3, 2008 at 12:57 PM • 77 Comments

Browser Insecurity

This excellent paper measures insecurity in the global population of browsers, using Google’s web server logs. Why is this important? Because browsers are an increasingly popular attack vector.

The results aren’t good.

…at least 45.2%, or 637 million users, were not using the most secure Web browser version on any working day from January 2007 to June 2008. These browsers are an easy target for drive-by download attacks as they are potentially vulnerable to known exploits.

That number breaks down as 577 million users of Internet Explorer, 38 million of Firefox, 17 million of Safari, and 5 million of Opera. Lots more detail in the paper, including some ideas for technical solutions.
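
The measurement idea is simple, even if running it over Google-scale logs is not. A toy sketch, with a hypothetical latest-secure-version table rather than the paper’s data:

    # Classify browser versions from web logs against the latest secure release.
    LATEST_SECURE = {"Firefox": "2.0.0.16", "Opera": "9.51"}  # hypothetical table

    def is_outdated(browser: str, version: str) -> bool:
        latest = LATEST_SECURE.get(browser)
        if latest is None:
            return False  # unknown browser; can't classify
        as_tuple = lambda v: tuple(int(p) for p in v.split("."))
        return as_tuple(version) < as_tuple(latest)

    print(is_outdated("Firefox", "2.0.0.14"))  # True: exposed to known exploits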

EDITED TO ADD (7/2): More commentary.

Posted on July 3, 2008 at 7:02 AM • 20 Comments

Dan Wallach on Electronic Voting Machines

It’s been a while since I’ve written about electronic voting machines, but Dan Wallach has an excellent blog post about the current line of argument from the voting machine companies and why it’s wrong.

Unsurprisingly, the vendors and their trade organization are spinning the results of these studies, as best they can, in an attempt to downplay their significance. Hopefully, legislators and election administrators are smart enough to grasp the vendors’ behavior for what it actually is and take appropriate steps to bolster our election integrity.

Until then, the bottom line is that many jurisdictions in Texas and elsewhere in the country will be using e-voting equipment this November with known security vulnerabilities, and the procedures and controls they are using will not be sufficient to either prevent or detect sophisticated attacks on their e-voting equipment. While there are procedures with the capability to detect many of these attacks (e.g., post-election auditing of voter-verified paper records), Texas has not certified such equipment for use in the state. Texas’s DREs are simply vulnerable to and undefended against attacks.

Posted on July 2, 2008 at 6:15 AM • 46 Comments

Kill Switches and Remote Control

It used to be that just the entertainment industries wanted to control your computers—and televisions and iPods and everything else—to ensure that you didn’t violate any copyright rules. But now everyone else wants to get their hooks into your gear.

OnStar will soon include the ability for the police to shut off your engine remotely. Buses are getting the same capability, in case terrorists want to re-enact the movie Speed. The Pentagon wants a kill switch installed on airplanes, and is worried about potential enemies installing kill switches on their own equipment.

Microsoft is doing some of the most creative thinking along these lines, with something it’s calling “Digital Manners Policies.” According to its patent application, DMP-enabled devices would accept broadcast “orders” limiting their capabilities. Cellphones could be remotely set to vibrate mode in restaurants and concert halls, and be turned off on airplanes and in hospitals. Cameras could be prohibited from taking pictures in locker rooms and museums, and recording equipment could be disabled in theaters. Professors finally could prevent students from texting one another during class.

The possibilities are endless, and very dangerous. Making this work involves building a nearly flawless hierarchical system of authority. That’s a difficult security problem even in its simplest form. Distributing that system among a variety of different devices—computers, phones, PDAs, cameras, recorders—with different firmware and manufacturers, is even more difficult. Not to mention delegating different levels of authority to various agencies, enterprises, industries and individuals, and then enforcing the necessary safeguards.
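
To see where the authority problem bites, here is a hypothetical sketch (mine, not Microsoft’s design) of the core primitive such a scheme needs: a device that obeys only signed orders. Notice that everything hard lives outside the snippet.

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric import ed25519

    # Hypothetical DMP-style order: obeyed only if signed by a trusted authority.
    authority = ed25519.Ed25519PrivateKey.generate()  # who gets to hold this key?
    order = b"cameras: disable"
    signature = authority.sign(order)

    device_trust_anchor = authority.public_key()      # who installed this, and why?
    try:
        device_trust_anchor.verify(signature, order)
        print("order accepted: camera goes dark")
    except InvalidSignature:
        print("order rejected")

    # Key distribution, delegation, revocation, user overrides, and stolen
    # "supercontroller" keys are all outside this snippet -- and they are
    # the whole problem.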

Once we go down this path—giving one device authority over other devices—the security problems start piling up. Who has the authority to limit functionality of my devices, and how do they get that authority? What prevents them from abusing that power? Do I get the ability to override their limitations? In what circumstances, and how? Can they override my override?

How do we prevent this from being abused? Can a burglar, for example, enforce a “no photography” rule and prevent security cameras from working? Can the police enforce the same rule to avoid another Rodney King incident? Do the police get “superuser” devices that cannot be limited, and do they get “supercontroller” devices that can limit anything? How do we ensure that only they get them, and what do we do when the devices inevitably fall into the wrong hands?

It’s comparatively easy to make this work in closed specialized systems—OnStar, airplane avionics, military hardware—but much more difficult in open-ended systems. If you think Microsoft’s vision could possibly be securely designed, all you have to do is look at the dismal effectiveness of the various copy-protection and digital-rights-management systems we’ve seen over the years. That’s a similar capabilities-enforcement mechanism, albeit simpler than these more general systems.

And that’s the key to understanding this system. Don’t be fooled by the scare stories of wireless devices on airplanes and in hospitals, or visions of a world where no one is yammering loudly on their cellphones in posh restaurants. This is really about media companies wanting to exert their control further over your electronics. They not only want to prevent you from surreptitiously recording movies and concerts, they want your new television to enforce good “manners” on your computer, and not allow it to record any programs. They want your iPod to politely refuse to copy music to a computer other than your own. They want to enforce their legislated definition of manners: to control what you do and when you do it, and to charge you repeatedly for the privilege whenever possible.

“Digital Manners Policies” is a marketing term. Let’s call this what it really is: Selective Device Jamming. It’s not polite, it’s dangerous. It won’t make anyone more secure—or more polite.

This essay originally appeared on Wired.com.

Posted on July 1, 2008 at 6:48 AM • 65 Comments
