Entries Tagged "terrorism"


American Authorities Secretly Give International Travelers Terrorist "Risk" Score

From the Associated Press:

Without notifying the public, federal agents for the past four years have assigned millions of international travelers, including Americans, computer-generated scores rating the risk they pose of being terrorists or criminals.

The travelers are not allowed to see or directly challenge these risk assessments, which the government intends to keep on file for 40 years.

The scores are assigned to people entering and leaving the United States after computers assess their travel records, including where they are from, how they paid for tickets, their motor vehicle records, past one-way travel, seating preference and what kind of meal they ordered.

The program’s existence was quietly disclosed earlier in November when the government put an announcement detailing the Automated Targeting System, or ATS, for the first time in the Federal Register, a fine-print compendium of federal rules. Privacy and civil liberties lawyers, congressional aides and even law enforcement officers said they thought this system had been applied only to cargo.

As with all such systems, we are judged in secret, by a computer algorithm, with no way to see or even challenge our scores. Kafka would be proud.
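The AP article doesn't describe the algorithm, but a system like ATS presumably reduces to something like a weighted feature score. Here's a hypothetical sketch; every feature name and weight below is invented, since the real criteria are secret:

```python
# Hypothetical illustration of an opaque risk score like the one the AP
# describes. All features and weights are invented; the real ATS
# criteria are not public -- which is exactly the problem.
WEIGHTS = {
    "one_way_ticket": 2.0,
    "paid_cash": 1.5,
    "meal_special": 0.5,    # meal preference, per the AP report
    "high_risk_origin": 3.0,
}

def risk_score(traveler):
    """Sum the weights of whichever flagged attributes apply."""
    return sum(w for feature, w in WEIGHTS.items() if traveler.get(feature))

# A traveler can neither see this number nor learn which features drove it.
score = risk_score({"one_way_ticket": True, "paid_cash": True})
# score == 3.5
```

The point of the sketch is how little it takes: a handful of innocuous attributes, some arbitrary weights, and a threshold nobody outside the agency can inspect.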

“If this catches one potential terrorist, this is a success,” Ahern said.

That’s just too idiotic a statement to even rebut.

EDITED TO ADD (12/3): More commentary.

Posted on December 1, 2006 at 12:12 PM

Global Envelope

The DHS wants to share terrorist biometric information:

Robert Mocny, acting director of the U.S. Visitor and Immigrant Status Indicator Technology program, outlined a proposal under which the United States would begin exchanging information about terrorists first with closely allied governments in Britain, Europe and Japan, and then progressively extend the program to other countries as a means of foiling terrorist attacks.

The Global Envelope proposal apparently opened the door to the exchange of biometric information about persons in this country to other governments and vice versa, in an environment where even officials’ pledges to observe privacy principles collide with inconsistent or absent legal protections.

In remarks to the International Conference on Biometrics and Ethics in Washington this afternoon, Mocny repeatedly stressed DHS’ commitment to observing privacy principles during the design and implementation of its biometric systems. “We have a responsibility to use this information wisely and responsibly,” he said.

Mocny cited the need to avoid duplication of effort by developing technical standards that all national biometric identification systems would use.

He emphasized repeatedly that information sharing is appropriate around the world on biometric methods of identifying terrorists who pose a risk to the public. Noting that his organization already receives information about terrorist threats from around the globe, Mocny said, “We have a responsibility to make a Global Security Envelope [that would coordinate information policies and technical standards.]”

Mocny conceded that each of the 10 privacy laws currently in effect in the United States has an exemption clause for national-security purposes. He added that the department only resorts to its essentially unlimited authority under those clauses when officials decide that there are compelling reasons to do so.

Anyone think that this will be any better than the no-fly list?

Posted on November 30, 2006 at 12:51 PM

Interesting Bioterrorism Drill

Earlier this month there was a bioterrorism drill in Seattle. Postal carriers delivered dummy packages to “nearly thousands” of people (yes, that’s what the article said; my guess is “nearly a thousand”), testing how the postal system could be used to quickly deliver medications. (Here’s a reaction from a recipient.)

Sure, there are lots of scenarios where this kind of delivery system isn’t good enough, but that’s not the point. In general, I think emergency response is one of the few areas where we need to spend more money. And, in general, I think tests and drills like this are good—how else will we know if the systems will work the way we think they will?

Posted on November 27, 2006 at 1:44 PM

Perceived Risk vs. Actual Risk

I’ve written repeatedly about the difference between perceived and actual risk, and how it explains many seemingly perverse security trade-offs. Here’s a Los Angeles Times op-ed that does the same. The author is Daniel Gilbert, psychology professor at Harvard. (I just recently finished his book Stumbling on Happiness, which is not a self-help book but instead about how the brain works. Strongly recommended.)

The op-ed is about the public’s reaction to the risks of global warming and terrorism, but the points he makes are much more general. He gives four reasons why some risks are perceived to be more or less serious than they actually are:

  1. We over-react to intentional actions, and under-react to accidents, abstract events, and natural phenomena.

    That’s why we worry more about anthrax (with an annual death toll of roughly zero) than influenza (with an annual death toll of a quarter-million to a half-million people). Influenza is a natural accident, anthrax is an intentional action, and the smallest action captures our attention in a way that the largest accident doesn’t. If two airplanes had been hit by lightning and crashed into a New York skyscraper, few of us would be able to name the date on which it happened.

  2. We over-react to things that offend our morals.

    When people feel insulted or disgusted, they generally do something about it, such as whacking each other over the head, or voting. Moral emotions are the brain’s call to action.

    He doesn’t say it, but it’s reasonable to assume that we under-react to things that don’t.

  3. We over-react to immediate threats and under-react to long-term threats.

    The brain is a beautifully engineered get-out-of-the-way machine that constantly scans the environment for things out of whose way it should right now get. That’s what brains did for several hundred million years—and then, just a few million years ago, the mammalian brain learned a new trick: to predict the timing and location of dangers before they actually happened.

    Our ability to duck that which is not yet coming is one of the brain’s most stunning innovations, and we wouldn’t have dental floss or 401(k) plans without it. But this innovation is in the early stages of development. The application that allows us to respond to visible baseballs is ancient and reliable, but the add-on utility that allows us to respond to threats that loom in an unseen future is still in beta testing.

  4. We under-react to changes that occur slowly and over time.

    The human brain is exquisitely sensitive to changes in light, sound, temperature, pressure, size, weight and just about everything else. But if the rate of change is slow enough, the change will go undetected. If the low hum of a refrigerator were to increase in pitch over the course of several weeks, the appliance could be singing soprano by the end of the month and no one would be the wiser.

It’s interesting to compare this to what I wrote in Beyond Fear (pages 26-27) about perceived vs. actual risk:

  • People exaggerate spectacular but rare risks and downplay common risks. They worry more about earthquakes than they do about slipping on the bathroom floor, even though the latter kills far more people than the former. Similarly, terrorism causes far more anxiety than common street crime, even though the latter claims many more lives. Many people believe that their children are at risk of being given poisoned candy by strangers at Halloween, even though there has been no documented case of this ever happening.
  • People have trouble estimating risks for anything not exactly like their normal situation. Americans worry more about the risk of mugging in a foreign city, no matter how much safer it might be than where they live back home. Europeans routinely perceive the U.S. as being full of guns. Men regularly underestimate how risky a situation might be for an unaccompanied woman. The risks of computer crime are generally believed to be greater than they are, because computers are relatively new and the risks are unfamiliar. Middle-class Americans can be particularly naïve and complacent; their lives are incredibly secure most of the time, so their instincts about the risks of many situations have been dulled.
  • Personified risks are perceived to be greater than anonymous risks. Joseph Stalin said, “A single death is a tragedy, a million deaths is a statistic.” He was right; large numbers have a way of blending into each other. The final death toll from 9/11 was less than half of the initial estimates, but that didn’t make people feel less at risk. People gloss over statistics of automobile deaths, but when the press writes page after page about nine people trapped in a mine—complete with human-interest stories about their lives and families—suddenly everyone starts paying attention to the dangers with which miners have contended for centuries. Osama bin Laden represents the face of Al Qaeda, and has served as the personification of the terrorist threat. Even if he were dead, it would serve the interests of some politicians to keep him “alive” for his effect on public opinion.
  • People underestimate risks they willingly take and overestimate risks in situations they can’t control. When people voluntarily take a risk, they tend to underestimate it. When they have no choice but to take the risk, they tend to overestimate it. Terrorists are scary because they attack arbitrarily, and from nowhere. Commercial airplanes are perceived as riskier than automobiles, because the controls are in someone else’s hands—even though they’re much safer per passenger mile. Similarly, people overestimate even more those risks that they can’t control but think they, or someone, should. People worry about airplane crashes not because we can’t stop them, but because we think as a society we should be capable of stopping them (even if that is not really the case). While we can’t really prevent criminals like the two snipers who terrorized the Washington, DC, area in the fall of 2002 from killing, most people think we should be able to.
  • Last, people overestimate risks that are being talked about and remain an object of public scrutiny. News, by definition, is about anomalies. Endless numbers of automobile crashes hardly make news like one airplane crash does. The West Nile virus outbreak in 2002 killed very few people, but it worried many more because it was in the news day after day. AIDS kills about 3 million people per year worldwide—about three times as many people each day as died in the terrorist attacks of 9/11. If a lunatic goes back to the office after being fired and kills his boss and two coworkers, it’s national news for days. If the same lunatic shoots his ex-wife and two kids instead, it’s local news…maybe not even the lead story.

Posted on November 3, 2006 at 7:18 AM

Forge Your Own Boarding Pass

Last week Christopher Soghoian created a Fake Boarding Pass Generator website, allowing anyone to create a fake Northwest Airlines boarding pass: any name, airport, date, flight. This action got him visited by the FBI, who later came back, smashed open his front door, and seized his computers and other belongings. It also resulted in calls for his arrest, most visibly from Rep. Edward Markey (D-Massachusetts), who has since recanted. And it’s gotten him more publicity than he ever dreamed of.

All for demonstrating a known and obvious vulnerability in airport security involving boarding passes and IDs.

This vulnerability is nothing new. There was an article on CSOonline from February 2006. There was an article on Slate from February 2005. Sen. Chuck Schumer spoke about it as well. I wrote about it in the August 2003 issue of Crypto-Gram. It’s possible I was the first person to publish it, but I certainly wasn’t the first person to think of it.

It’s kind of obvious, really. If you can make a fake boarding pass, you can get through airport security with it. Big deal; we know.

You can also use a fake boarding pass to fly on someone else’s ticket. The trick is to have two boarding passes: one legitimate, in the name the reservation is under, and another phony one that matches the name on your photo ID. Use the fake boarding pass in your name to get through airport security, and the real ticket in someone else’s name to board the plane.

This means that a terrorist on the no-fly list can get on a plane: He buys a ticket in someone else’s name, perhaps using a stolen credit card, and uses his own photo ID and a fake ticket to get through airport security. Since the ticket is in an innocent’s name, it won’t raise a flag on the no-fly list.

You can also use a fake boarding pass instead of your real one if you have the “SSSS” mark and want to avoid secondary screening, or if you don’t have a ticket but want to get into the gate area.

Historically, forging a boarding pass was difficult. It required special paper and equipment. But since Alaska Airlines started the trend in 1999, most airlines now allow you to print your boarding pass using your home computer and bring it with you to the airport. This program was temporarily suspended after 9/11, but was quickly brought back because of pressure from the airlines. People who print the boarding passes at home can go directly to airport security, and that means fewer airline agents are required.

Airline websites generate boarding passes as graphics files, which means anyone with a little bit of skill can modify them in a program like Photoshop. All Soghoian’s website did was automate the process with a single airline’s boarding passes.

Soghoian claims that he wanted to demonstrate the vulnerability. You could argue that he went about it in a stupid way, but I don’t think what he did is substantively worse than what I wrote in 2003. Or what Schumer described in 2005. Why is it that the person who demonstrates the vulnerability is vilified while the person who describes it is ignored? Or, even worse, the organization that causes it is ignored? Why are we shooting the messenger instead of discussing the problem?

As I wrote in 2005: “The vulnerability is obvious, but the general concepts are subtle. There are three things to authenticate: the identity of the traveler, the boarding pass and the computer record. Think of them as three points on the triangle. Under the current system, the boarding pass is compared to the traveler’s identity document, and then the boarding pass is compared with the computer record. But because the identity document is never compared with the computer record—the third leg of the triangle—it’s possible to create two different boarding passes and have no one notice. That’s why the attack works.”

The way to fix it is equally obvious: Verify the accuracy of the boarding passes at the security checkpoints. If passengers had to scan their boarding passes as they went through screening, the computer could verify that the boarding pass already matched to the photo ID also matched the data in the computer. Close the authentication triangle and the vulnerability disappears.
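The fix can be sketched in a few lines. The names and the reservation set below are invented stand-ins for the airline's computer record; the point is only to show which comparison is missing today and how scanning the pass closes the triangle:

```python
# Sketch of the checkpoint fix: scan the boarding pass at security and
# verify it against the computer record, closing the third leg of the
# authentication triangle. Names and reservations are invented.

def current_checkpoint(photo_id_name, pass_name):
    # Today's check: boarding pass vs. photo ID only (one leg).
    return pass_name == photo_id_name

def fixed_checkpoint(photo_id_name, pass_name, reservations):
    # Leg 1: pass vs. ID, as before.
    if pass_name != photo_id_name:
        return False
    # The missing leg: the same scanned pass must also match the
    # computer record, transitively tying the ID to the record.
    return pass_name in reservations

reservations = {"Innocent Traveler"}

# The two-boarding-pass attack: a forged pass in the attacker's own
# name sails through today's check...
assert current_checkpoint("Attacker", "Attacker")
# ...but fails once the scanned pass is checked against the record.
assert not fixed_checkpoint("Attacker", "Attacker", reservations)
# An honest traveler still passes.
assert fixed_checkpoint("Innocent Traveler", "Innocent Traveler", reservations)
```

Note that the fixed check never compares the ID to the record directly; it doesn't have to, because both are compared against the same scanned pass.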

But before we start spending time and money and Transportation Security Administration agents, let’s be honest with ourselves: The photo ID requirement is no more than security theater. Its only security purpose is to check names against the no-fly list, which would still be a joke even if it weren’t so easy to circumvent. Identification is not a useful security measure here.

Interestingly enough, while the photo ID requirement is presented as an antiterrorism security measure, it is really an airline-business security measure. It was first implemented after the explosion of TWA Flight 800 over the Atlantic in 1996. The government originally thought a terrorist bomb was responsible, but the explosion was later shown to be an accident.

Unlike every other airplane security measure—including reinforcing cockpit doors, which could have prevented 9/11—the airlines didn’t resist this one, because it solved a business problem: the resale of non-refundable tickets. Before the photo ID requirement, these tickets were regularly advertised in classified pages: “Round trip, New York to Los Angeles, 11/21-30, male, $100.” Since the airlines never checked IDs, anyone of the correct gender could use the ticket. Airlines hated that, and tried repeatedly to shut that market down. In 1996, the airlines were finally able to solve that problem and blame it on the FAA and terrorism.

So business is why we have the photo ID requirement in the first place, and business is why it’s so easy to circumvent it. Instead of going after someone who demonstrates an obvious flaw that is already public, let’s focus on the organizations that are actually responsible for this security failure and have failed to do anything about it for all these years. Where’s the TSA’s response to all this?

The problem is real, and the Department of Homeland Security and TSA should either fix the security or scrap the system. What we’ve got now is the worst security system of all: one that annoys everyone who is innocent while failing to catch the guilty.

This essay—my 30th for Wired.com—appeared today.

EDITED TO ADD (11/4): More news and commentary.

EDITED TO ADD (1/10): Great essay by Matt Blaze.

Posted on November 2, 2006 at 6:21 AM

Total Information Awareness Is Back

Remember Total Information Awareness?

In November 2002, the New York Times reported that the Defense Advanced Research Projects Agency (DARPA) was developing a tracking system called “Total Information Awareness” (TIA), which was intended to detect terrorists through analyzing troves of information. The system, developed under the direction of John Poindexter, then-director of DARPA’s Information Awareness Office, was envisioned to give law enforcement access to private data without suspicion of wrongdoing or a warrant.

TIA purported to capture the “information signature” of people so that the government could track potential terrorists and criminals involved in “low-intensity/low-density” forms of warfare and crime. The goal was to track individuals through collecting as much information about them as possible and using computer algorithms and human analysis to detect potential activity.

The project called for the development of “revolutionary technology for ultra-large all-source information repositories,” which would contain information from multiple sources to create a “virtual, centralized, grand database.” This database would be populated by transaction data contained in current databases such as financial records, medical records, communication records, and travel records as well as new sources of information. Also fed into the database would be intelligence data.

The public found it so abhorrent, and objected so forcefully, that Congress killed funding for the program in September 2003.

None of us thought that meant the end of TIA, only that it would turn into a classified program and be renamed. Well, the program is now called Tangram, and it is classified:

The government’s top intelligence agency is building a computerized system to search very large stores of information for patterns of activity that look like terrorist planning. The system, which is run by the Office of the Director of National Intelligence, is in the early research phases and is being tested, in part, with government intelligence that may contain information on U.S. citizens and other people inside the country.

It encompasses existing profiling and detection systems, including those that create “suspicion scores” for suspected terrorists by analyzing very large databases of government intelligence, as well as records of individuals’ private communications, financial transactions, and other everyday activities.

The information about Tangram comes from a government document looking for contractors to help design and build the system.

DefenseTech writes:

The document, which is a description of the Tangram program for potential contractors, describes other, existing profiling and detection systems that haven’t moved beyond so-called “guilt-by-association models,” which link suspected terrorists to potential associates, but apparently don’t tell analysts much about why those links are significant. Tangram wants to improve upon these methods, as well as investigate the effectiveness of other detection links such as “collective inferencing,” which attempt to create suspicion scores of entire networks of people simultaneously.
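The "guilt-by-association" models the contractor document describes are essentially score propagation over a contact graph. A toy sketch (the graph, seed scores, and damping factor are all invented; Tangram's actual models are classified):

```python
# Toy "guilt-by-association" scoring: each person's suspicion is their
# seed score plus a damped fraction of their neighbors' scores. This is
# the class of model the Tangram document describes, not its actual code.

def propagate(graph, seeds, damping=0.5, rounds=2):
    scores = {node: seeds.get(node, 0.0) for node in graph}
    for _ in range(rounds):
        new = {}
        for node, neighbors in graph.items():
            spill = sum(scores[n] for n in neighbors) * damping
            new[node] = seeds.get(node, 0.0) + spill
        scores = new
    return scores

graph = {
    "suspect": ["contact"],
    "contact": ["suspect", "bystander"],
    "bystander": ["contact"],
}
scores = propagate(graph, {"suspect": 1.0})
# Even the bystander, two hops from the seed, picks up a nonzero score.
```

That last line is the core problem: suspicion leaks outward to everyone near a suspect, which is why such models bury analysts in false positives.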

Data mining for terrorists has always been a dumb idea. And the existence of Tangram illustrates the problem with Congress trying to stop a program by killing its funding; it just comes back under a different name.

Posted on October 31, 2006 at 6:59 AM

Cheyenne Mountain Retired

Cheyenne Mountain was the United States’ underground command post, designed to survive a direct hit from a nuclear warhead. It’s a Cold War relic—built in the 1960s—and retiring the site is probably a good idea. But this paragraph gives me pause:

Keating said the new control room, in contrast, could be damaged if a terrorist commandeered a jumbo jet and somehow knew exactly where to crash it. But “how unlikely is that? We think very,” Keating said.

I agree that this is an unlikely terrorist target, but still.

Posted on October 25, 2006 at 4:35 PM

Air Cargo Security

BBC is reporting a “major” hole in air cargo security. Basically, cargo is being flown on passenger planes without being screened. A would-be terrorist could therefore blow up a passenger plane by shipping a bomb via FedEx.

In general, cargo deserves much less security scrutiny than passengers. Here’s the reasoning:

Cargo planes are much less of a terrorist risk than passenger planes, because terrorism is about innocents dying. Blowing up a planeload of FedEx packages is annoying, but not nearly as terrorizing as blowing up a planeload of tourists. Hence, the security around air cargo doesn’t have to be as strict.

Given that, if most air cargo flies around on cargo planes, then it’s okay for some small amount of cargo to fly as baggage on passenger planes, assuming the selection is random and shippers don’t know beforehand which packages will go that way. A would-be terrorist would be better off taking his bomb and blowing up a bus than shipping it and hoping it might possibly be put on a passenger plane.

At least, that’s the theory. But theory and practice are different.

The British system involves “known shippers”:

Under a system called “known shipper” or “known consignor”, companies which have been security-vetted by government-appointed agents can send parcels by air which do not have to be subjected to any further security checks.

Unless a package from a known shipper arouses suspicion or is subject to a random search, it is taken on trust that its contents are safe.

But:

Captain Gary Boettcher, president of the US Coalition Of Airline Pilots Associations, says the “known shipper” system “is probably the weakest part of the cargo security today”.

“There are approx 1.5 million known shippers in the US. There are thousands of freight forwarders. Anywhere down the line packages can be intercepted at these organisations,” he said.

“Even reliable respectable organisations, you really don’t know who is in the warehouse, who is tampering with packages, putting parcels together.”

This system has already been exploited by drug smugglers:

Mr Adeyemi brought pounds of cocaine into Britain unchecked by air cargo, transported from the US by the Federal Express courier company. He did not have to pay the postage.

This was made possible because he managed to illegally buy the confidential Fed Ex account numbers of reputable and security cleared companies from a former employee.

An accomplice in the US was able to put the account numbers on drugs parcels which, as they appeared to have been sent by known shippers, arrived unchecked at Stansted Airport.

When police later contacted the companies whose accounts and security clearance had been so abused they discovered they had suspected nothing.

And it’s not clear that a terrorist can’t figure out which shipments are likely to be put on passenger aircraft:

However several large companies such as FedEx and UPS offer clients the chance to follow the progress of their parcels online.

This is a facility that Chris Yates, an expert on airline security for Jane’s Transport, says could be exploited by terrorists.

“From these you can get a fair indication when that package is in the air, if you are looking to get a package into New York from Heathrow at a given time of day.”

And BBC reports that 70% of cargo is shipped on passenger planes. That seems like too high a number.

If we had infinite budget, of course we’d screen all air cargo. But we don’t, and it’s a reasonable trade-off to ignore cargo planes and concentrate on passenger planes. But there are some awfully big holes in this system.

Posted on October 24, 2006 at 6:11 AM

Perceived Risk vs. Actual Risk

Good essay on perceived vs. actual risk. The hook is Mayor Daley of Chicago demanding a no-fly zone over Chicago in the wake of the New York City airplane crash.

Other politicians (with the spectacular and notable exception of New York City Mayor Michael Bloomberg) and self-appointed “experts” are jumping on the tragic accident—repeat, accident—in New York to sound off again about the “danger” of light aircraft, and how they must be regulated, restricted, banned.

OK, for all of those ranting about “threats” from GA aircraft, we’ll believe that you’re really serious about controlling “threats” when you call for:

  • Banning all vans within cities. A small panel van was used in the first World Trade Center attack. The bomb, which weighed 1,500 pounds, killed six and injured 1,042.
  • Banning all box trucks from cities. Timothy McVeigh’s rented Ryder truck carried a 5,000-pound bomb that killed 168 in Oklahoma City.
  • Banning all semi-trailer trucks. They can carry bombs weighing more than 50,000 pounds.
  • Banning newspapers on subways. That’s how the terrorists hid packages of sarin nerve gas in the Tokyo subway system. They killed 12.
  • Banning backpacks on all buses and subways. That’s how the terrorists got the bombs into the London subway system. They killed 52.
  • Banning all cell phones on trains. That’s how they detonated the bombs in backpacks placed on commuter trains in Madrid. They killed 191.
  • Banning all small pleasure boats on public waterways. That’s how terrorists attacked the USS Cole, killing 17.
  • Banning all heavy or bulky clothing in all public places. That’s how suicide bombers hide their murderous charges. Thousands killed.

Number of people killed by a terrorist attack using a GA aircraft? Zero.

Number of people injured by a terrorist attack using a GA aircraft? Zero.

Property damage from a terrorist attack using a GA aircraft? None.

So Mr. Mayor (and Mr. Governor, Ms. Senator, Mr. Congressman, and Mr. “Expert”), if you’re truly serious about “protecting” the public, advocate all of the bans I’ve listed above. Using the “logic” you apply to general aviation aircraft, you’re forced to conclude that newspapers, winter coats, cell phones, backpacks, trucks, and boats all pose much greater risks to the public.

So be consistent in your logic. If you are dead set on restricting a personal transportation system that carries more passengers than any single airline, reaches more American cities than all the airlines combined, provides employment for 1.3 million American citizens and $160 billion in business “to protect the public,” then restrict or control every other transportation system that the terrorists have demonstrated they can use to kill.

And, on the same topic, why it doesn’t make sense to ban small aircraft from cities as a terrorism defense.

Posted on October 23, 2006 at 10:01 AM

