Blog: January 2009 Archives

Friday Squid Blogging: Safe Quick Undercarriage Immobilization Device (SQUID)

New security device:

But what if an officer could lay down a road trap in seconds, then activate it from a nearby hiding place? What if—like sea monsters of ancient lore—the trap could reach up from below to ensnare anything from a MINI Cooper to a Ford Expedition? What if this trap were as small as a spare tire, as light as a tire jack, and cost under a grand?

Thanks to imaginative design and engineering funded by the Small Business Innovation Research (SBIR) Office of the U.S. Department of Homeland Security’s Science and Technology Directorate (S&T), such a trap may be stopping brigands by 2010. It’s called the Safe Quick Undercarriage Immobilization Device, or SQUID. When closed, the current prototype resembles a cheese wheel full of holes. When open (deployed), it becomes a mass of tentacles entangling the axles. By stopping the axles instead of the wheels, SQUID may change how fleeing drivers are, quite literally, caught.

Of course, there’s a lot separating a cool idea from reality. But it is a cool idea.

Posted on January 30, 2009 at 4:34 PM • 36 Comments

Jeffrey Rosen on the Department of Homeland Security

Excellent article:

The same elements of psychology lead people to exaggerate the likelihood of terrorist attacks: Images of terrifying but highly unusual catastrophes on television—such as the World Trade Center collapsing—are far more memorable than images of more mundane and more prevalent threats, like dying in car crashes. Psychologists call this the “availability heuristic,” in which people estimate the probability of something occurring based on how easy it is to bring examples of the event to mind.

As a result of this psychological bias, large numbers of Americans have overestimated the probability of future terrorist strikes: In a poll conducted a few weeks after September 11, respondents saw a 20 percent chance that they would be personally harmed in a terrorist attack within the next year and nearly a 50 percent chance that the average American would be harmed. Those alarmist predictions, thankfully, proved to be wrong; in fact, since September 11, international terrorism has killed only a few hundred people per year around the globe, as John Mueller points out in Overblown. At the current rates, Mueller argues, the lifetime probability of any resident of the globe being killed by terrorism is just one in 80,000.

This public anxiety is the central reason for both the creation of DHS and its subsequent emphasis on showy prevention measures, which Schneier calls a form of “security theater.” But that raises a question: Even if DHS doesn’t actually make us safer, could its existence still be justified if reducing the public’s fears leads to tangible economic benefits? “If the public’s response is based on irrational, emotional fears, it may be reasonable for the government to do things that make us feel better, even if those don’t make us safer in a rational sense, because if they feel better, people will fly on planes and behave in a way that’s good for the economy,” Tierney told me. But the psychological impact of DHS still has to be subject to cost-benefit analysis: On balance, is the government actually calming people rather than making them more nervous? Tierney argues convincingly that the same public fears that encourage government officials to spend money on flashy preventive measures also encourage them to exaggerate the terrorist threat. “It’s very difficult for a government official to come out and say anything like, ‘Let’s put this threat in perspective,'” he told me. “If they were to do so, and there isn’t a terrorist attack, they get no credit; and, if there is, that’s the end of their career.” Of course, no government official feels this pressure more acutely than the head of homeland security. And so, even as DHS seeks to tamp down public fears with expensive and often wasteful preventive measures, it may also be encouraging those fears—which, in turn, creates ever more public demand for spending on prevention.

Michael Chertoff’s public comments about terrorism embody this dilemma: Despite his laudable efforts to speak soberly and responsibly about terrorism—and to argue that there are many kinds of attacks we simply can’t prevent—the incentives associated with his job have led him at times to increase, rather than diminish, public anxiety. Last March he declared that, “if we don’t recognize the struggle we are in as a significant existential struggle, then it is going to be very hard to maintain the focus.” If nuclear attacks aren’t likely and smaller events aren’t existential threats, I asked, why did he say the war on terrorism is a “significant existential struggle”? “To me, existential is a threat that shakes the core of a society’s confidence and causes a significant and long-lasting line of damage to the country,” he replied. But it would take a series of weekly Virginia Tech-style shootings or London-style subway bombings to shake the core of American confidence; and Al Qaeda hasn’t come close to mustering that frequency of low-level attacks in any Western democracy since September 11. “Terrorism kills a certain number of people, and so do forest fires,” Mueller told me. “If terrorism is merely killing certain numbers of people, then it’s not an existential threat, and money is better spent on smoke alarms or forcing people to wear seat belts instead of chasing terrorists.”

Posted on January 30, 2009 at 11:38 AM • 25 Comments

Interview with an Adware Developer

Fascinating:

I should probably first speak about how adware works. Most adware targets Internet Explorer (IE) users because obviously they’re the biggest share of the market. In addition, they tend to be the less-savvy chunk of the market. If you’re using IE, then either you don’t care or you don’t know about all the vulnerabilities that IE has.

IE has a mechanism called a Browser Helper Object (BHO) which is basically a gob of executable code that gets informed of web requests as they’re going. It runs in the actual browser process, which means it can do anything the browser can do—which means basically anything. We would have a Browser Helper Object that actually served the ads, and then we made it so that you had to kill all the instances of the browser to be able to delete the thing. That’s a little bit of persistence right there.

If you also have an installer, a little executable, you can make a Registry entry and every time this thing reboots, the installer will check to make sure the BHO is there. If it is, great. If it isn’t, then it will install it. That’s fine until somebody goes and deletes the executable.

The next thing that Direct Revenue did—actually I should say what I did, because I was pretty heavily involved in this—was make a poller which continuously polls about every 10 seconds or so to see if the BHO was there and alive. If it was, great. If it wasn’t, [ the poller would ] install it. To make sure the poller was less likely to be detected, we developed this algorithm (a really trivial one) for making a random-looking filename that was consistent per machine but was not easy to guess. I think it was the first 6 or 8 characters of the DES-encoded MAC address. You take the MAC address, encode it with DES, take the first six characters and that was it. That was pretty good, except the file itself would be the same binary. If you md5-summed the file it would always be the same everywhere, and it was always in the same location.

Next we made a function shuffler, which would go into an executable, take the functions and randomly shuffle them. Once you do that, then of course the signature’s all messed up. [ We also shuffled ] a lot of the pointers within each actual function. It completely changed the shape of the executable.

We then made a bootstrapper, which was a tiny tiny piece of code written in Assembler which would decrypt the executable in memory, and then just run it. At the same time, we also made a virtual process executable. I’ve never heard of anybody else doing this before. Windows has this thing called Create Remote Thread. Basically, the semantics of Create Remote Thread are: You’re a process, I’m a different process. I call you and say “Hey! I have this bit of code. I’d really like it if you’d run this.” You’d say, “Sure,” because you’re a Windows process—you’re all hippie-like and free love. Windows processes, by the way, are insanely promiscuous. So! We would call a bunch of processes, hand them all a gob of code, and they would all run it. Each process would all know about two of the other ones. This allowed them to set up a ring…mutual support, right?

So we’ve progressed now from having just a Registry key entry, to having an executable, to having a randomly-named executable, to having an executable which is shuffled around a little bit on each machine, to one that’s encrypted—really more just obfuscated—to an executable that doesn’t even run as an executable. It runs merely as a series of threads. Now, those threads can communicate with one another, they would check to make sure that the BHO was there and up, and that the whatever other software we had was also up.

There was one further step that we were going to take but didn’t end up doing, and that is we were going to get rid of threads entirely, and just use interrupt handlers. It turns out that in Windows, you can get access to the interrupt handler pretty easily. In fact, you can register with the OS a chunk of code to handle a given interrupt. Then all you have to do is arrange for an interrupt to happen, and every time that interrupt happens, you wake up, do your stuff and go away. We never got to actually do that, but it was something we were thinking we’d do.

EDITED TO ADD (1/30): Good commentary on the interview, showing how it whitewashes history.

EDITED TO ADD (2/13): Some more commentary.

Posted on January 30, 2009 at 6:19 AM • 45 Comments

Helping the Terrorists

It regularly comes as a surprise to people that our own infrastructure can be used against us. And in the wake of terrorist attacks or plots, there are fear-induced calls to ban, disrupt or control that infrastructure. According to officials investigating the Mumbai attacks, the terrorists used images from Google Earth to help learn their way around. This isn’t the first time Google Earth has been charged with helping terrorists: in 2007, Google Earth images of British military bases were found in the homes of Iraqi insurgents. Incidents such as these have led many governments to demand that Google remove or blur images of sensitive locations: military bases, nuclear reactors, government buildings, and so on. An Indian court has been asked to ban Google Earth entirely.

This isn’t the only way our information technology helps terrorists. Last year, a US army intelligence report worried that terrorists could plan their attacks using Twitter, and there are unconfirmed reports that the Mumbai terrorists read the Twitter feeds about their attacks to get real-time information they could use. British intelligence is worried that terrorists might use voice over IP services such as Skype to communicate. Terrorists may train on Second Life and World of Warcraft. We already know they use websites to spread their message and possibly even to recruit.

Of course, all of this is exacerbated by open wireless access, which has been repeatedly labelled a terrorist tool and which has been the object of attempted bans.

Mobile phone networks help terrorists, too. The Mumbai terrorists used them to communicate with each other. This has led some cities, including New York and London, to propose turning off mobile phone coverage in the event of a terrorist attack.

Let’s all stop and take a deep breath. By its very nature, communications infrastructure is general. It can be used to plan both legal and illegal activities, and it’s generally impossible to tell which is which. When I send and receive email, it looks exactly the same as a terrorist doing the same thing. To the mobile phone network, a call from one terrorist to another looks exactly the same as a mobile phone call from one victim to another. Any attempt to ban or limit infrastructure affects everybody. If India bans Google Earth, a future terrorist won’t be able to use it to plan; nor will anybody else. Open Wi-Fi networks are useful for many reasons, the large majority of them positive, and closing them down affects all those reasons. Terrorist attacks are very rare, and it is almost always a bad trade-off to deny society the benefits of a communications technology just because the bad guys might use it too.

Communications infrastructure is especially valuable during a terrorist attack. Twitter was the best way for people to get real-time information about the attacks in Mumbai. If the Indian government shut Twitter down – or London blocked mobile phone coverage – during a terrorist attack, the lack of communications for everyone, not just the terrorists, would increase the level of terror and could even increase the body count. Information lessens fear and makes people safer.

None of this is new. Criminals have used telephones and mobile phones since they were invented. Drug smugglers use airplanes and boats, radios and satellite phones. Bank robbers have long used cars and motorcycles as getaway vehicles, and horses before then. I haven’t seen it talked about yet, but the Mumbai terrorists used boats as well. They also wore boots. They ate lunch at restaurants, drank bottled water, and breathed the air. Society survives all of this because the good uses of infrastructure far outweigh the bad uses, even though the good uses are – by and large – small and pedestrian and the bad uses are rare and spectacular. And while terrorism turns society’s very infrastructure against itself, we only harm ourselves by dismantling that infrastructure in response – just as we would if we banned cars because bank robbers used them too.

This essay originally appeared in The Guardian.

EDITED TO ADD (1/29): Other ways we help the terrorists: we put computers in our libraries, we allow anonymous chat rooms, we permit commercial databases and we engage in biomedical research. Grocery stores, too, sell food to just anyone who walks in.

EDITED TO ADD (2/3): Washington DC wants to jam cell phones too.

EDITED TO ADD (2/9): Another thing that will help the terrorists: in-flight Internet.

Posted on January 29, 2009 at 6:00 AM • 54 Comments

The Exclusionary Rule and Security

Earlier this month, the Supreme Court ruled that evidence gathered as a result of errors in a police database is admissible in court. Their narrow decision is wrong, and will only ensure that police databases remain error-filled in the future.

The specifics of the case are simple. A computer database said there was a felony arrest warrant pending for Bennie Herring when there actually wasn’t. When the police came to arrest him, they searched his home and found illegal drugs and a gun. The Supreme Court was asked to rule whether the police had the right to arrest him for possessing those items, even though there was no legal basis for the search and arrest in the first place.

What’s at issue here is the exclusionary rule, which basically says that unconstitutionally or illegally collected evidence is inadmissible in court. It might seem like a technicality, but excluding what is called “the fruit of the poisonous tree” is a security system designed to protect us all from police abuse.

We have a number of rules limiting what the police can do: rules governing arrest, search, interrogation, detention, prosecution, and so on. And one of the ways we ensure that the police follow these rules is by forbidding the police to receive any benefit from breaking them. In fact, we design the system so that the police actually harm their own interests by breaking them, because all evidence that stems from breaking the rules is inadmissible.

And that’s what the exclusionary rule does. If the police search your home without a warrant and find drugs, they can’t arrest you for possession. Since the police have better things to do than waste their time, they have an incentive to get a warrant.

The Herring case is more complicated, because the police thought they did have a warrant. The error was not a police error, but a database error. And, in fact, Chief Justice Roberts wrote for the majority: “The exclusionary rule serves to deter deliberate, reckless, or grossly negligent conduct, or in some circumstances recurring or systemic negligence. The error in this case does not rise to that level.”

Unfortunately, Roberts is wrong. Government databases are filled with errors. People often can’t see data about themselves, and have no way to correct the errors if they do learn of any. And more and more databases are trying to exempt themselves from the Privacy Act of 1974, and specifically the provisions that require data accuracy. The legal argument for excluding this evidence was best made by an amicus curiae brief filed by the Electronic Privacy Information Center, but in short, the court should exclude the evidence because it’s the only way to ensure police database accuracy.

We are protected from becoming a police state by limits on police power and authority. This is not a trade-off we make lightly: we deliberately hamper law enforcement’s ability to do its job because we recognize that these limits make us safer. Without the exclusionary rule, your only remedy against an illegal search is to bring legal action against the police—and that can be very difficult. We, the people, would rather have you go free than motivate the police to ignore the rules that limit their power.

By not applying the exclusionary rule in the Herring case, the Supreme Court missed an important opportunity to motivate the police to purge errors from their databases. Constitutional lawyers have written many articles about this ruling, but the most interesting idea comes from George Washington University professor Daniel J. Solove, who proposes this compromise: “If a particular database has reasonable protections and deterrents against errors, then the Fourth Amendment exclusionary rule should not apply. If not, then the exclusionary rule should apply. Such a rule would create an incentive for law enforcement officials to maintain accurate databases, to avoid all errors, and would ensure that there would be a penalty or consequence for errors.”

Increasingly, we are being judged by the trail of data we leave behind us. Increasingly, data accuracy is vital to our personal safety and security. And if errors made by police databases aren’t held to the same legal standard as errors made by policemen, then more and more innocent Americans will find themselves the victims of incorrect data.

This essay originally appeared on the Wall Street Journal website.

EDITED TO ADD (2/1): More on the assault on the exclusionary rule.

EDITED TO ADD (2/9): Here’s another recent court case involving the exclusionary rule, and a thoughtful analysis by Orin Kerr.

Posted on January 28, 2009 at 7:12 AM • 108 Comments

A Rational Response to Peanut Allergies and Children

Some parents of children with peanut allergies are not asking their school to ban peanuts. They consider it more important that teachers know which children are likely to have a reaction, and how to deal with it when it happens; i.e., how to use an EpiPen.

This is a much more resilient response to the threat. It works even when the peanut ban fails. It works whether the child has an anaphylactic reaction to nuts, fruit, dairy, gluten, or whatever.

It’s so rare to see rational risk management when it comes to children and safety; I just had to blog it.

Related blog post, including a very lively comments section.

Posted on January 27, 2009 at 2:10 PM • 55 Comments

Remote Fireworks Launcher

How soon before these people are accused of helping the terrorists?

With around a thousand people in the UK injured every year by fireworks, a new electronic remote control ‘Firework Launcher’ will put safety first and ensure everyone enjoys the Christmas and new year celebrations. This innovative, compact device dramatically reduces the chance of injury by launching fireworks without a flame and at a safe distance—so all you need to worry about is how spectacular those fireworks really are!

Do fireworks kill more people than terrorists each year? Probably.

Posted on January 27, 2009 at 12:34 PM • 46 Comments

Teaching Risk Analysis in School

Good points:

“I regard myself as part of a movement we call risk literacy,” Professor Spiegelhalter told The Times. “It should be a basic component of discussion about issues in media, politics and in schools.

“We should essentially be teaching the ability to deconstruct the latest media story about a cancer risk or a wonder drug, so people can work out what it means. Really, that should be part of everyone’s language.”

As an aspect of science, risk was “as important as learning about DNA, maybe even more important,” he said. “The only problem is putting it on the curriculum: that can be the kiss of death. At the moment we can do it as part of maths outreach, maths inspiration, which is a real privilege because we can make it fun. It’s not teaching to an exam. But I actually think it should be in there, partly to make the curriculum more interesting.”

Reminds me of John Paulos’s Innumeracy.

Posted on January 26, 2009 at 1:55 PM • 25 Comments

BitArmor's No-Breach Guarantee

BitArmor now comes with a security guarantee. They even use me to tout it:

“We think this guarantee is going to encourage others to offer similar ones. Bruce Schneier has been calling on the industry to do something like this for a long time,” he [BitArmor’s CEO] says.

Sounds good, until you read the fine print:

If your company has to publicly report a breach while your data is protected by BitArmor, we’ll refund the purchase price of your software. It’s that simple. No gimmicks, no hassles.

[…]

BitArmor cannot be held accountable for data breaches, publicly or otherwise.

So if BitArmor fails and someone steals your data, and then you get ridiculed in the press, sued, and lose your customers to competitors—BitArmor will refund the purchase price.

Bottom line: PR gimmick, nothing more.

Yes, I think that software vendors need to accept liability for their products, and that we won’t see real improvements in security until then. But it has to be real liability, not this sort of token liability. And it won’t happen without the insurance companies; that’s the industry that knows how to buy and sell liability.

EDITED TO ADD (2/13): BitArmor responds.

Posted on January 23, 2009 at 10:35 AM • 39 Comments

When Voting Machine Audit Logs Don't Help

Wow:

Computer audit logs showing what occurred on a vote tabulation system that lost ballots in the November election are raising more questions not only about how the votes were lost, but also about the general reliability of voting system audit logs to record what occurs during an election and to ensure the integrity of results.

The logs, which Threat Level obtained through a public records request from Humboldt County, California, are produced by the Global Election Management System, the tabulation software, also known as GEMS, that counts the votes cast on all voting machines—touch-screen and optical-scan machines—made by Premier Election Solutions (formerly called Diebold Election Systems).

The article gets pretty technical, but is worth reading.

Posted on January 23, 2009 at 7:43 AM • 39 Comments

New Police Computer System Impeding Arrests

In Queensland, Australia, policemen are arresting fewer people because their new data-entry system is too annoying:

He said police were growing reluctant to make arrests following the latest phased roll-out of QPRIME, or Queensland Police Records Information Management Exchange.

“They are reluctant to make arrests and they’re showing a lot more discretion in the arrests they make because QPRIME is so convoluted to navigate,” Mr Leavers said. He said minor street offences, some traffic offences and minor property matters were going unchallenged, but not serious offences.

However, Mr Leavers said there had been occasions where offenders were released rather than kept in custody because of the length of time it now took to prepare court summaries.

“There was an occasion where two people were arrested on multiple charges. It took six detectives more than six hours to enter the details into QPRIME,” he said. “It would have taken even longer to do the summary to go to court the next morning, so basically the suspects were released on bail, rather than kept in custody.”

He said jobs could now take up to seven hours to process because of the amount of data entry involved.

This is a good example of how non-security incentives affect security decisions.

Posted on January 22, 2009 at 1:51 PM • 27 Comments

Breach Notification Laws

There are three reasons for breach notification laws. One, it’s common politeness that when you lose something of someone else’s, you tell him. The prevailing corporate attitude before the law—”They won’t notice, and if they do notice they won’t know it’s us, so we are better off keeping quiet about the whole thing”—is just wrong. Two, it provides statistics to security researchers as to how pervasive the problem really is. And three, it forces companies to improve their security.

That last point needs a bit of explanation. The problem with companies protecting your data is that it isn’t in their financial best interest to do so. That is, the companies are responsible for protecting your data, but bear none of the costs if your data is compromised. You suffer the harm, but you have no control—or even knowledge—of the company’s security practices. The idea behind such laws, and how they were sold to legislators, is that they would increase the cost—both in bad publicity and the actual notification—of security breaches, motivating companies to spend more to prevent them. In economic terms, the law reduces the externalities and forces companies to deal with the true costs of these data breaches.

So how has it worked?

Earlier this year, three researchers at the Heinz School of Public Policy and Management at Carnegie Mellon University—Sasha Romanosky, Rahul Telang and Alessandro Acquisti—tried to answer that question. They looked at reported data breaches and rates of identity theft from 2002 to 2007, comparing states with a law to states without one. If these laws had their desired effects, people in states with notification laws should experience fewer incidences of identity theft. The result: not so much. The researchers found data breach notification laws reduced identity theft by just 2 percent on average.

I think there’s a combination of things going on. Identity theft is being reported far more today than five years ago, so it’s difficult to compare identity theft rates before and after the state laws were enacted. Most identity theft occurs when someone’s home or work computer is compromised, not from theft of large corporate databases, so the effect of these laws is small. Most of the security improvements companies made didn’t make much of a difference, reducing the effect of these laws.

The laws rely on public shaming. It’s embarrassing to have to admit to a data breach, and companies should be willing to spend to avoid this PR expense. The problem is, in order for this to work well, public shaming needs the cooperation of the press. And there’s an attenuation effect going on. The first major breach after the first state disclosure law was in February 2005 in California, when ChoicePoint sold personal data on 145,000 people to criminals. The event was big news, ChoicePoint’s stock tanked, and it was shamed into improving its security.

Next, LexisNexis exposed personal data on 300,000 individuals, and then Citigroup lost data on 3.9 million individuals. The law worked; the only reason we knew about these security breaches was because of the law. But the breaches came in increasing numbers, and in larger quantities. Data breach stories felt more like “crying wolf,” and soon data breaches were no longer news.

Today, the remaining cost is that of the direct mail campaign to notify customers, which often turns into a marketing opportunity.

I’m still a fan of these laws, if only for the first two reasons I listed. Disclosure is important, but it’s not going to solve identity theft. As I’ve written previously, the reason theft of personal information is common is that the data is valuable once stolen. The way to mitigate the risk of fraud due to impersonation is not to make personal information difficult to steal, it’s to make it difficult to use.

Disclosure laws only deal with the economic externality of data owners protecting your personal information. What we really need are laws prohibiting financial institutions from granting credit to someone using your name with only a minimum of authentication.

This is the second half of a point/counterpoint with Marcus Ranum. Marcus’s essay is here.

Posted on January 21, 2009 at 6:59 AM • 39 Comments

In-Person Credit Card Scam

Surely this isn’t new:

Suspects entered the business, selected merchandise worth almost $8,000. They handed a credit card with no financial backing to the clerk which when swiped was rejected by the cash register’s computer. The suspects then informed the clerk that this rejection was expected and to contact the credit card company by phone to receive a payment approval confirmation code. The clerk was then given a number to call which was answered by another person in the scam who approved the purchase and gave a bogus confirmation number. The suspects then left the store with the unpaid for merchandise.

Anyone reading this blog would know enough not to call a number given to you by the potential purchaser, but presumably many store clerks don’t have good security sense.

Posted on January 19, 2009 at 1:23 PM • 40 Comments

"The Cost of Fearing Strangers"

Excellent essay from the Freakonomics blog:

As we wrote in Freakonomics, most people are pretty terrible at risk assessment. They tend to overstate the risk of dramatic and unlikely events at the expense of more common and boring (if equally devastating) events. A given person might fear a terrorist attack and mad cow disease more than anything in the world, whereas in fact she’d be better off fearing a heart attack (and therefore taking care of herself) or salmonella (and therefore washing her cutting board thoroughly).

Why do we fear the unknown more than the known? That’s a larger question than I can answer here (not that I’m capable anyway), but it probably has to do with the heuristics—the shortcut guesses—our brains use to solve problems, and the fact that these heuristics rely on the information already stored in our memories.

And what gets stored away? Anomalies—the big, rare, “black swan” events that are so dramatic, so unpredictable, and perhaps world-changing, that they imprint themselves on our memories and con us into thinking of them as typical, or at least likely, whereas in fact they are extraordinarily rare.

Nothing I haven’t said before. Remember, if it’s in the news, don’t worry about it. The very definition of news is “something that almost never happens.” When something is so common that it’s no longer news—car crashes, domestic violence—that’s when you should worry about it.

Posted on January 19, 2009 at 6:19 AM • 26 Comments

Michael Chertoff Claims that Hijackings were Routine Prior to 9/11

I missed this interview with DHS Secretary Michael Chertoff from December. It’s all worth reading, but I want to point out where he claims that airplane hijackings were routine prior to 9/11:

What I can tell you is that in the period prior to September 12, 2001, it was a regular, routine issue to have American aircraft hijacked or blown up from time to time, whether it was Lockerbie or TSA or TWA 857 [I believe he meant TWA 847 – Joel] or 9/11 itself. And we haven’t had even a serious attempt at a hijacking or bombing on an American plane since then.

BoingBoing provides the actual facts:

According to Airsafe.com, the last flight previous to 9/11 to be hijacked with fatalities from an American destination was a Pacific Southwest Airlines flight on December 7th, 1987. “Lockerbie” refers to Pan Am Flight 103 which was destroyed by a bomb over Scotland after departing from London Heathrow International Airport on its way to JFK, with screening done—as now—by an organization other than the TSA. TWA Flight 847 departed from Athens (Ellinikon) International Airport, also not under TSA oversight.

While Wikipedia’s list of aircraft hijackings may not be comprehensive—I cannot find a complete list from the FAA, which does not seem to list hijackings, including 9/11, in its Accidents & Incidents Data—the last incident of an American flight being hijacked was in 1994, when FedEx Flight 705 was hijacked by a disgruntled employee.

The implication that hijacking or bombing of American airline flights is a regular occurrence is not borne out by history, nor does it follow that increased screening by the TSA at airports has prevented more attacks since 9/11.

Posted on January 16, 2009 at 5:24 AM • 37 Comments

Economic Distress and Fear

This was the Quotation of the Day from January 12:

Part of the debtor mentality is a constant, frantically suppressed undercurrent of terror. We have one of the highest debt-to-income ratios in the world, and apparently most of us are two paychecks from the street. Those in power—governments, employers—exploit this, to great effect. Frightened people are obedient—not just physically, but intellectually and emotionally. If your employer tells you to work overtime, and you know that refusing could jeopardize everything you have, then not only do you work the overtime, but you convince yourself that you’re doing it voluntarily, out of loyalty to the company; because the alternative is to acknowledge that you are living in terror. Before you know it, you’ve persuaded yourself that you have a profound emotional attachment to some vast multinational corporation: you’ve indentured not just your working hours, but your entire thought process. The only people who are capable of either unfettered action or unfettered thought are those who—either because they’re heroically brave, or because they’re insane, or because they know themselves to be safe—are free from fear.

Quote is from The Likeness, a novel set in Ireland, by Tana French.

Posted on January 15, 2009 at 6:14 AM • 60 Comments

Michael Chertoff Parodied in The Onion

Funny:

“While 9/11 has historically always fallen on 9/11, we as Americans need to be prepared for a wide range of dates,” Chertoff said during a White House press conference. “There’s a chance we could all find ourselves living in a post-6/10 world as early as next July. Unless, that is, we’re already living in a pre-2/14 world.”

“1/1, 1/2, 1/3, 1/4, 1/5,” Chertoff continued for nearly 45 minutes, “12/28, 12/29, 12/30, 12/31—these are all plausible and serious threats.”

Not very far from reality. Refuse to be terrorized, everyone.

Posted on January 14, 2009 at 12:04 PM • 31 Comments

Two Security Camera Studies

From San Francisco:

San Francisco’s Community Safety Camera Program was launched in late 2005 with the dual goals of fighting crime and providing police investigators with a retroactive investigatory tool. The program placed more than 70 non-monitored cameras in mainly high-crime areas throughout the city. This report released today (January 9, 2009) consists of a multi-disciplinary collaboration examining the program’s technical aspects, management and goals, and policy components, as well as a quasi-experimental statistical evaluation of crime reports in order to provide a comprehensive evaluation of the program’s effectiveness. The results find that while the program did result in a 20% reduction in property crime within the view of the cameras, other forms of crime were not affected, including violent crime, one of the primary targets of the program.

From the UK:

The first study of its kind into the effectiveness of surveillance cameras revealed that almost every Scotland Yard murder inquiry uses their footage as evidence.

In 90 murder cases over a one year period, CCTV was used in 86 investigations, and senior officers said it helped to solve 65 cases by capturing the murder itself on film, or tracking the movements of the suspects before or after an attack.

In a third of the cases a good quality still image was taken from the footage from which witnesses identified the killer.

My own writing on security cameras is here. The question isn’t whether they’re useful or not, but whether their benefits are worth the costs.

Posted on January 13, 2009 at 6:58 AM • 33 Comments

Shaping the Obama Administration's Counterterrorism Strategy

I’m at a two-day conference: Shaping the Obama Administration’s Counterterrorism Strategy, sponsored by the Cato Institute in Washington, DC. It’s sold out, but you can watch or listen to the event live on the Internet. I’ll be on a panel tomorrow at 9:00 AM.

I’ve been told that there’s a lively conversation about the conference on Twitter, but—as I have previously said—I don’t Twitter.

Posted on January 12, 2009 at 12:44 PM • 25 Comments

DHS's Files on Travelers

This is interesting:

I had been curious about what’s in my travel dossier, so I made a Freedom of Information Act (FOIA) request for a copy. I’m posting here a few sample pages of what officials sent me.

My biggest surprise was that the Internet Protocol (I.P.) address of the computer used to buy my tickets via a Web agency was noted. On the first document image posted here, I’ve circled in red the I.P. address of the computer used to buy my pair of airline tickets.

[…]

The rest of my file contained details about my ticketed itineraries, the amount I paid for tickets, and the airports I passed through overseas. My credit card number was not listed, nor were any hotels I’ve visited. In two cases, the basic identifying information about my traveling companion (whose ticket was part of the same purchase as mine) was included in the file. Perhaps that information was included by mistake.

Posted on January 12, 2009 at 5:15 AM • 23 Comments

Friday Squid Blogging: Bizarre Squid Reproductive Habits

Lots of them:

Hoving investigated the reproductive techniques of no fewer than ten different squids and related cuttlefish—from the twelve-metre long giant squid to a mini-squid of no more than twenty-five millimetres in length. Along the way he made a number of remarkable discoveries. Hoving: “Reproduction is no fun if you’re a squid. With one species, the Taningia danae, I discovered that the males give the females cuts of at least 5 centimetres deep in their necks with their beaks or hooks—they don’t have suction pads. They then insert their packets of sperm, also called spermatophores, into the cuts.”

Posted on January 9, 2009 at 4:06 PM • 6 Comments

Impersonation

Impersonation isn’t new. In 1560, a Frenchman was executed for impersonating Martin Guerre, and this week hackers impersonated Barack Obama on Twitter. It’s not even unique to humans: mockingbirds, Viceroy butterflies, and the mimic octopus all use impersonation as a survival strategy. For people, detecting impersonation is a hard problem for three reasons: we need to verify the identity of people we don’t know, we interact with people through “narrow” communications channels like the telephone and Internet, and we want computerized systems to do the verification for us.

Traditional impersonation involves people fooling people. It’s still done today: impersonating garbage men to collect tips, impersonating parking lot attendants to collect fees, or impersonating the French president to fool Sarah Palin. Impersonating people like policemen, security guards, and meter readers is a common criminal tactic.

These tricks work because we all regularly interact with people we don’t know. No one could successfully impersonate your brother, your best friend, or your boss, because you know them intimately. But a policeman or a parking lot attendant? That’s just someone with a badge or a uniform. But badges and ID cards only help if you know how to verify one. Do you know what a valid police ID looks like? Or how to tell a real telephone repairman’s badge from a forged one?

Still, it’s human nature to trust these credentials. We naturally trust uniforms, even though we know that anyone can wear one. When we visit a Web site, we use the professionalism of the page to judge whether or not it’s really legitimate—never mind that anyone can cut and paste graphics. Watch the next time someone other than law enforcement verifies your ID; most people barely look at it.

Impersonation is even easier over limited communications channels. On the telephone, how can you distinguish someone working at your credit card company from someone trying to steal your account details and login information? On e-mail, how can you distinguish someone from your company’s tech support from a hacker trying to break into your network—or the mayor of Paris from an impersonator? Once in a while someone frees himself from jail by faxing a forged release order to his warden. This is social engineering: impersonating someone convincingly enough to fool the victim.

These days, a lot of identity verification happens with computers. Computers are fast at computation but not very good at judgment, and can be tricked. So people can fool speed cameras by taping a fake license plate over the real one, fingerprint readers with a piece of tape, or automatic face scanners with—and I’m not making this up—a photograph of a face held in front of their own. Even the most bored policeman wouldn’t fall for any of those tricks.

This is why identity theft is such a big problem today. So much authentication happens online, with only a small amount of information: user ID, password, birth date, Social Security number, and so on. Anyone who gets that information can impersonate you to a computer, which doesn’t know any better.

Despite all of these problems, most authentication systems work most of the time. Even something as ridiculous as faxed signatures works, and can be legally binding. But no authentication system is perfect, and impersonation is always possible.

This lack of perfection is okay, though. Security is a trade-off, and any well-designed authentication system balances security with ease of use, customer acceptance, cost, and so on. More authentication isn’t always better. Banks make this trade-off when they don’t bother authenticating signatures on checks under amounts like $25,000; it’s cheaper to deal with fraud after the fact. Web sites make this trade-off when they use simple passwords instead of something more secure, and merchants make this trade-off when they don’t bother verifying your signature against your credit card. We make this trade-off when we accept police badges, Best Buy uniforms, and faxed signatures with only a cursory amount of verification.

Good authentication systems also balance false positives against false negatives. Impersonation is just one way these systems can fail; they can also fail to authenticate the real person. An ATM is better off allowing occasional fraud than denying legitimate account holders access to their money. On the other hand, a false positive in a nuclear launch system is much more dangerous; better to not launch the missiles.

Decentralized authentication systems work better than centralized ones. Open your wallet, and you’ll see a variety of physical tokens used to identify you to different people and organizations: your bank, your credit card company, the library, your health club, and your employer, as well as a catch-all driver’s license used to identify you in a variety of circumstances. That assortment is actually more secure than a single centralized identity card: each system must be broken individually, and breaking one doesn’t give the attacker access to everything. This is one of the reasons that centralized systems like REAL-ID make us less secure.

Finally, any good authentication system uses defense in depth. Since no authentication system is perfect, there need to be other security measures in place if authentication fails. That’s why all of a corporation’s assets and information aren’t available to anyone who can bluff his way into the corporate offices. That is why credit card companies have expert systems analyzing suspicious spending patterns. And it’s why identity theft won’t be solved by making personal information harder to steal.

We can reduce the risk of impersonation, but it will always be with us; technology cannot “solve” it in any absolute sense. Like any security, the trick is to balance the trade-offs. Too little security, and criminals withdraw money from all our bank accounts. Too much security, and when Barack Obama calls to congratulate you on your reelection, you won’t believe it’s him.

This essay originally appeared in The Wall Street Journal.

Posted on January 9, 2009 at 2:04 PM • 34 Comments

Allocating Resources: Financial Fraud vs. Terrorism

Interesting trade-off:

The FBI has been forced to transfer agents from its counter-terrorism divisions to work on Bernard Madoff’s alleged $50 billion fraud scheme as victims of the biggest scam in the world continue to emerge.

The Freakonomics blog discusses this:

This might lead you to ask an obvious counter-question: Has the anti-terror enforcement since 9/11 in the U.S. helped fuel the financial meltdown? That is, has the diversion of resources, personnel, and mindshare toward preventing future terrorist attacks—including, you’d have to say, the wars in Afghanistan and Iraq—contributed to a sloppy stewardship of the financial industry?

It quotes a New York Times article:

Federal officials are bringing far fewer prosecutions as a result of fraudulent stock schemes than they did eight years ago, according to new data, raising further questions about whether the Bush administration has been too lax in policing Wall Street.

Legal and financial experts say that a loosening of enforcement measures, cutbacks in staffing at the Securities and Exchange Commission, and a shift in resources toward terrorism at the F.B.I. have combined to make the federal government something of a paper tiger in investigating securities crimes.

We’ve seen this problem over and over again when it comes to counterterrorism: in an effort to defend against the rare threats, we make ourselves more vulnerable to the common threats.

Posted on January 9, 2009 at 6:54 AM • 32 Comments

Biometrics

Biometrics may seem new, but they’re the oldest form of identification. Tigers recognize each other’s scent; penguins recognize calls. Humans recognize each other by sight from across the room, voices on the phone, signatures on contracts and photographs on driver’s licenses. Fingerprints have been used to identify people at crime scenes for more than 100 years.

What is new about biometrics is that computers are now doing the recognizing: thumbprints, retinal scans, voiceprints, and typing patterns. There’s a lot of technology involved here, in trying to both limit the number of false positives (someone else being mistakenly recognized as you) and false negatives (you being mistakenly not recognized). Generally, a system can choose to have less of one or the other; less of both is very hard.

Biometrics can vastly improve security, especially when paired with another form of authentication such as passwords. But it’s important to understand their limitations as well as their strengths. On the strength side, biometrics are hard to forge. It’s hard to affix a fake fingerprint to your finger or make your retina look like someone else’s. Some people can mimic voices, and make-up artists can change people’s faces, but these are specialized skills.

On the other hand, biometrics are easy to steal. You leave your fingerprints everywhere you touch, your iris scan everywhere you look. Regularly, hackers have copied the prints of officials from objects they’ve touched, and posted them on the Internet. We haven’t yet had an example of a large biometric database being hacked into, but the possibility is there. Biometrics are unique identifiers, but they’re not secrets.

And a stolen biometric can fool some systems. It can be as easy as cutting out a signature, pasting it onto a contract, and then faxing the page to someone. The person on the other end doesn’t know that the signature isn’t valid because he didn’t see it fixed onto the page. Remote logins by fingerprint fail in the same way. If there’s no way to verify the print came from an actual reader, not from a stored computer file, the system is much less secure.

A more secure system is to use a fingerprint to unlock your mobile phone or computer. Because there is a trusted path from the fingerprint reader to the stored fingerprint the system uses to compare, an attacker can’t inject a previously stored print as easily as he can cut and paste a signature. A photo on an ID card works the same way: the verifier can compare the face in front of him with the face on the card.

Fingerprints on ID cards are more problematic, because the attacker can try to fool the fingerprint reader. Researchers have made false fingers out of rubber or glycerin. Manufacturers have responded by building readers that also detect pores or a pulse.

The lesson is that biometrics work best if the system can verify that the biometric came from the person at the time of verification. The biometric identification system at the gates of the CIA headquarters works because there’s a guard with a large gun making sure no one is trying to fool the system.

Of course, not all systems need that level of security. At Counterpane, the security company I founded, we installed hand geometry readers at the access doors to the operations center. Hand geometry is a hard biometric to copy, and the system was closed and didn’t allow electronic forgeries. It worked very well.

One more problem with biometrics: they don’t fail well. Passwords can be changed, but if someone copies your thumbprint, you’re out of luck: you can’t update your thumb. Passwords can be backed up, but if you alter your thumbprint in an accident, you’re stuck. The failures don’t have to be this spectacular: a voiceprint reader might not recognize someone with a sore throat, or a fingerprint reader might fail outside in freezing weather. Biometric systems need to be analyzed in light of these possibilities.

Biometrics are easy, convenient, and when used properly, very secure; they’re just not a panacea. Understanding how they work and fail is critical to understanding when they improve security and when they don’t.

This essay originally appeared in the Guardian, and is an update of an essay I wrote in 1998.

Posted on January 8, 2009 at 12:53 PM • 62 Comments

Reporting Unruly Football Fans via Text Message

This system is available in most NFL stadiums:

Fans still are urged to complain to an usher or call a security hotline in the stadium to report unruly behavior. But text-messaging lines—typically advertised on stadium scoreboards and on signs where fans gather—are aimed at allowing tipsters to surreptitiously alert security personnel via cellphone without getting involved with rowdies or missing part of a game.

As of this week, 29 of the NFL’s 32 teams had installed a text-message line or telephone hotline. Three clubs have neither: the New Orleans Saints, St. Louis Rams and Tennessee Titans. Ahlerich says he will “strongly urge” all clubs to have text lines in place for the 2009 season. A text line will be available at the Super Bowl for the first time when this season’s championship game is played at Tampa’s Raymond James Stadium on Feb. 1.

“If there’s someone around you that’s just really ruining your day, now you don’t have to sit there in silence,” says Jeffrey Miller, the NFL’s director of strategic security. “You can do this. It’s very easy. It’s quick. And you get an immediate response.”

The article talks a lot about false alarms and prank calls, but—in general—this seems like a good use of technology.

Posted on January 8, 2009 at 6:44 AM • 23 Comments

Kip Hawley Is Starting to Sound Like Me

Good quote:

“In the hurly-burly and the infinite variety of travel, you can end up with nonsensical results in which the T.S.A. person says, ‘Well, I’m just following the rules,'” Mr. Hawley said. “But if you have an enemy who is going to study your technology and your process, and if you have something they can figure out a way to get around, and they’re always figuring, then you have designed in a vulnerability.”

Posted on January 6, 2009 at 5:51 AM • 32 Comments

Trends in Counterfeit Currency

It’s getting worse:

More counterfeiters are using today’s ink-jet printers, computers and copiers to make money that’s just good enough to pass, he said, even though their product is awful.

In the past, he said, the best American counterfeiters were skilled printers who used heavy offset presses to turn out decent 20s, 50s and 100s. Now that kind of work is rare and almost all comes from abroad.

[…]

Green pointed to a picture hanging in his downtown conference room. It’s a photo from a 1980s Lenexa case that involved heavy printing presses and about 2 million fake dollars.

“That’s what we used to see,” he boomed. “That’s the kind of case we used to make.”

Agents discovered then that someone had purchased such equipment and a special kind of paper and it all went to the Lenexa shop. Then the agents secretly went in there with a court order and planted a tiny video camera on a Playboy calendar.

They streamed video 24/7 for days, stormed in with guns drawn and sent bad guys to federal prison.

Green’s voice sank as he described today’s sad-sack counterfeiters.

These people call up pictures of bills on their computers, buy paper at an office supply store and print out a few bills. They cut the bills apart, go into a store or bar and pass one or two.

Many offenders are involved with drugs, he said, often methamphetamine. If they get caught, so little money is involved that federal prosecutors won’t take the case.

It’s interesting. Counterfeits are becoming easier to detect, while people are becoming less skilled at detecting them:

Part of the problem, Green said, is that the government has changed the money so much to foil counterfeiting. With all the new bills out there, citizens and even many police officers don’t know what they’re supposed to look like.

Moreover, many people see paper money less because they use credit or debit cards.

The result: Ink-jet counterfeiting accounted for 60 percent of $103 million in fake money removed from circulation from October 2007 to August 2008, the Secret Service reports. In 1995, the figure was less than 1 percent.

Another article on the topic.

Posted on January 5, 2009 at 6:34 AM • 46 Comments

Another Recently Released NSA Document

American Cryptology during the Cold War, 1945-1989, by Thomas R. Johnson: documents 1, 2, 3, 4, 5, and 6.

In response to a declassification request by the National Security Archive, the secretive National Security Agency has declassified large portions of a four-part “top-secret Umbra” study, American Cryptology during the Cold War. Despite major redactions, this history discloses much new information about the agency’s history and the role of SIGINT and communications intelligence (COMINT) during the Cold War. Researched and written by NSA historian Thomas Johnson, the three parts released so far provide a frank assessment of the history of the Agency and its forerunners, warts-and-all.

Posted on January 2, 2009 at 12:17 PM • 12 Comments
