Blog: August 2007 Archives

Friday Squid Blogging: Squid Chowder

Mmmmm:

Put a big heavy pot on the stove and get some heat under it. Fry up the bacon until it starts to get crispy. Toss in the onions. Stir around until they start to get soft. Pile in the potatoes. Pour in two cans of vegetable broth. Stir. Toss in the squid, the bay leaves and the other seasonings. Cook over medium heat, stirring now and then, until the squid is past the rubber band phase (about half an hour), then another ten minutes. About this time the skin will probably be coming off of the potato pieces. (I never peel potatoes). Pour in the milk and the evaporated milk. Medium low heat, stir occasionally until it is almost boiling. Extricate the bay leaves. Put the lid on the pot. Turn off the heat. Wait 15 minutes or until you can’t stand it any more. Ladle into bowls. Eat.

Posted on August 31, 2007 at 4:44 PM • 10 Comments

Computer Forensics Case Study

This is a report on the presentation of computer forensic evidence in a UK trial.

There are three things that concern me here:

  1. The computer was operated by a police officer prior to forensic examination.
  2. The forensic examiner gave an opinion on what files constituted “radical Islamic politics.”
  3. The presence of documents in the “Windows Options” folders was construed as evidence that someone wanted to hide those documents.

In general, computer forensics is rather ad hoc. Traditional rules of evidence are broken all the time. But this seems like a pretty egregious example.

Posted on August 31, 2007 at 6:13 AM • 45 Comments

Australian Porn Filter Cracked

The headline is all you need to know:

Teen cracks AU$84 million porn filter in 30 minutes

(AU$84 million is $69.5 million U.S.; that’s real money.)

Remember that the issue isn’t that one smart kid can circumvent the censorship software, it’s that one smart kid—maybe this one, maybe another one—can write a piece of shareware that allows everyone to circumvent the censorship software.

It’s the same with DRM; technical measures just aren’t going to work.

Posted on August 30, 2007 at 12:50 PM • 38 Comments

Entering Passwords Through Eye Movement

Interesting:

Reducing Shoulder-surfing by Using Gaze-based Password Entry

Manu Kumar, Tal Garfinkel, Dan Boneh, Terry Winograd

Abstract:

Shoulder-surfing—using direct observation techniques, such as looking over someone’s shoulder, to get passwords, PINs and other sensitive personal information—is a problem that has been difficult to overcome. When a user enters information using a keyboard, mouse, touch screen or any traditional input device, a malicious observer may be able to acquire the user’s password credentials. We present EyePassword, a system that mitigates the issues of shoulder surfing via a novel approach to user input. With EyePassword, a user enters sensitive input (password, PIN, etc.) by selecting from an on-screen keyboard using only the orientation of their pupils (i.e. the position of their gaze on screen), making eavesdropping by a malicious observer largely impractical. We present a number of design choices and discuss their effect on usability and security. We conducted user studies to evaluate the speed, accuracy and user acceptance of our approach. Our results demonstrate that gaze-based password entry requires marginal additional time over using a keyboard, error rates are similar to those of using a keyboard and subjects preferred the gaze-based password entry approach over traditional approaches.
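
The interaction the paper describes is easy to sketch. Below is a minimal, hypothetical illustration of dwell-based gaze selection, the general technique behind on-screen-keyboard gaze entry; the keyboard layout, sample rate, and dwell threshold are invented for illustration and are not the authors' implementation:

```python
# Minimal sketch of dwell-based gaze selection: a key is "typed" when the
# gaze rests inside its bounding box long enough. Layout, sample rate, and
# threshold are hypothetical stand-ins, not the paper's implementation.
DWELL_SECONDS = 0.6
SAMPLE_HZ = 60
DWELL_SAMPLES = int(DWELL_SECONDS * SAMPLE_HZ)

# key -> (x0, y0, x1, y1) screen bounding box, in pixels
KEYBOARD = {
    "a": (0, 0, 100, 100),
    "b": (100, 0, 200, 100),
    "c": (200, 0, 300, 100),
}

def key_at(x, y):
    for key, (x0, y0, x1, y1) in KEYBOARD.items():
        if x0 <= x < x1 and y0 <= y < y1:
            return key
    return None

def dwell_select(gaze_samples):
    """Yield a key each time the gaze dwells on it for DWELL_SAMPLES samples."""
    current, count = None, 0
    for x, y in gaze_samples:
        key = key_at(x, y)
        if key == current:
            count += 1
            if key is not None and count == DWELL_SAMPLES:
                yield key  # fires once per dwell; resets when the gaze moves
        else:
            current, count = key, 1
```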

Posted on August 30, 2007 at 6:12 AM • 25 Comments

Technical Details on the FBI's Wiretapping Network

There’s a must-read article on Wired.com about DCSNet (Digital Collection System Network), the FBI’s high-tech point-and-click domestic wiretapping network. The information is based on nearly 1,000 pages of documentation released under FOIA to the EFF.

Together, the surveillance systems let FBI agents play back recordings even as they are being captured (like TiVo), create master wiretap files, send digital recordings to translators, track the rough location of targets in real time using cell-tower information, and even stream intercepts outward to mobile surveillance vans.

FBI wiretapping rooms in field offices and undercover locations around the country are connected through a private, encrypted backbone that is separated from the internet. Sprint runs it on the government’s behalf.

The network allows an FBI agent in New York, for example, to remotely set up a wiretap on a cell phone based in Sacramento, California, and immediately learn the phone’s location, then begin receiving conversations, text messages and voicemail pass codes in New York. With a few keystrokes, the agent can route the recordings to language specialists for translation.

The numbers dialed are automatically sent to FBI analysts trained to interpret phone-call patterns, and are transferred nightly, by external storage devices, to the bureau’s Telephone Application Database, where they’re subjected to a type of data mining called link analysis.
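
Link analysis, in its simplest form, just treats call records as a graph and asks who connects to whom. Here is a toy sketch of the idea, using entirely invented call records, not anything from the bureau's actual systems:

```python
from collections import defaultdict

# Toy call records: (caller, callee) pairs, entirely invented.
calls = [
    ("555-0101", "555-0202"),
    ("555-0101", "555-0303"),
    ("555-0202", "555-0303"),
    ("555-0404", "555-0303"),
]

# Build an undirected contact graph.
contacts = defaultdict(set)
for caller, callee in calls:
    contacts[caller].add(callee)
    contacts[callee].add(caller)

# The numbers with the most distinct contacts are the "hubs" an analyst
# would look at first.
for number, peers in sorted(contacts.items(), key=lambda kv: -len(kv[1])):
    print(number, len(peers))
```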

FBI endpoints on DCSNet have swelled over the years, from 20 “central monitoring plants” at the program’s inception, to 57 in 2005, according to undated pages in the released documents. By 2002, those endpoints connected to more than 350 switches.

Today, most carriers maintain their own central hub, called a “mediation switch,” that’s networked to all the individual switches owned by that carrier, according to the FBI. The FBI’s DCS software links to those mediation switches over the internet, likely using an encrypted VPN. Some carriers run the mediation switch themselves, while others pay companies like VeriSign to handle the whole wiretapping process for them.

Much, much more in the article. (And much chatter on this Slashdot thread.)

EDITED TO ADD (8/31): Commentary by Matt Blaze and Steve Bellovin.

Posted on August 29, 2007 at 11:39 AM • 26 Comments

Thieves Steal Drug-Sniffing Dog

Okay; this is clever:

Rex IV, a highly trained Belgian Malinois sheepdog with a string of drug hauls behind him, was checked on to a flight from Mexico City this week with seven other police dogs bound for an operation in the northern state of Sinaloa.

But when the dogs arrived at Mazatlan airport, Sinaloa, their police handlers discovered a small black mongrel puppy inside Rex IV’s cage, with the sniffer dog nowhere to be seen.

Whatever drug lord ordered that hit probably saved himself a whole lot of grief.

EDITED TO ADD (8/29): The dog was found in a park:

Working on a tip, federal police found Rex IV—a highly trained Belgian Malinois sheepdog with a string of drug hauls to its name—tied to a tree in a park in the gritty Iztapalapa neighborhood, a Public Security Ministry spokesman said.

“When they realized the police were onto them, they abandoned him in a park,” the spokesman told Reuters, adding that the dog’s identity was confirmed by scanning an embedded electronic chip.

Why didn’t they just slit the dog’s throat? I take it back: not so clever.

Posted on August 29, 2007 at 6:59 AM • 54 Comments

New German Hacking Law

There has been much written about the new German hacker-tool law, which went into effect earlier this month.

Dark Reading has the most interesting speculation:

Many security people say the law is so flawed and so broad that no one can really comply with it. “In essence, the way the laws are phrased now, there is no way to ever comply… even as a non-security company,” says researcher Halvar Flake, a.k.a. Thomas Dullien, CEO and head of research at Sabre Security.

“If I walked into a store now and told the clerk that I wish to buy Windows XP and I will use it to hack, then the clerk is aiding me in committing a crime by [selling me] Windows XP,” Dullien says. “The law doesn’t actually distinguish between what the intended purpose of a program is. It just says if you put a piece of code in a disposition that is used to commit a crime, you’re complicit in that crime.”

Dullien says his company’s BinNavi tool for debugging and analyzing code or malware is fairly insulated from the law because it doesn’t include exploits. But his company still must ensure it doesn’t sell to “dodgy” customers.

Many other German security researchers, meanwhile, have pulled their proof-of-concept exploit code and hacking tools offline for fear of prosecution.

[…]

The German law has even given some U.S. researchers pause as well. It’s unclear whether the long arm of the German law could reach them, so some aren’t taking any chances: The exploit-laden Metasploit hacking tool could fall under German law if someone possesses it, distributes it, or uses it, for instance. “I’m staying out of Germany,” says HD Moore, Metasploit’s creator and director of security research for BreakingPoint Systems.

“Just about everything the Metasploit project provides [could] fall under that law,” Moore says. “Every exploit, most of the tools, and even the documentation in some cases.”

Moore notes that most Linux distros are now illegal in Germany as well, because they include the open-source nmap security scanner tool—and some include Metasploit as well.

The law basically leaves the door open to outlaw any software used in a crime, notes Sabre Security’s Dullien.

Security researcher Thierry Zoller says the biggest problem with the new law is that it’s so vague that no one really knows what it means yet. “We have to wait for something to happen to know the limits.”

Posted on August 28, 2007 at 1:32 PM • 52 Comments

Mission Creep at Counterterrorism "Fusion Centers"

Fusion centers are state-run, with funding help from the Department of Homeland Security. It’s all sort of ad hoc, but their purpose is to “fuse” federal, state, and local intelligence against terrorism. But—no surprise—they’re not doing much actual fusion, and they’re more commonly used for other purposes.

From a Congressional Research Service report dated June 6, 2007:

Fusion centers are state-created entities largely financed and staffed by the states, and there is no one “model” for how a center should be structured. State and local law enforcement and criminal intelligence seem to be at the core of many of the centers. Although many of the centers initially had purely counterterrorism goals, for numerous reasons, they have increasingly gravitated toward an all-crimes and even broader all-hazards approach. While many of the centers have prevention of attacks as a high priority, little “true fusion,” or analysis of disparate data sources, identification of intelligence gaps, and pro-active collection of intelligence against those gaps which could contribute to prevention is occurring. Some centers are collocated with local offices of federal entities, yet in the absence of a functioning intelligence cycle process, collocation alone does not constitute fusion.

The federal role in supporting fusion centers consists largely of providing financial assistance, the majority of which has flowed through the Homeland Security Grant Program; sponsoring security clearances; providing human resources; producing some fusion center guidance and training; and providing congressional authorization and appropriation of national foreign intelligence program resources, as well as oversight hearings. This report includes over 30 options for congressional consideration to clarify and potentially enhance the federal government’s relationship with fusion centers. One of the central options is the potential drafting of a formal national fusion center strategy that would outline, among other elements, the federal government’s clear expectations of fusion centers, its position on sustainment funding, metrics for assessing fusion center performance, and definition of what constitutes a “mature” fusion center.

Honestly, the report itself is kind of boring, even for this sort of thing. There’s an interesting section on proactive vs. reactive security (p. 25):

Most fusion centers respond to incoming requests, suspicious activity reports, and/or finished information/intelligence products. This approach largely relies on data points or analysis that are already identified as potentially problematic. As mentioned above, it could be argued that this approach will only identify unsophisticated criminals and terrorists. The 2007 Fort Dix plot may serve as a good example—would law enforcement have ever become aware of this plot if the would-be perpetrators hadn’t taken their jihad video to a video store to have it copied? While state homeland security and law enforcement officials appear to have reacted quickly and passed the information to the FBI, would they have ever been able to find would-be terrorists within their midst if those individuals avoided activities, criminal or otherwise, that might bring to light their plot?

It is unclear if a single fusion center has successfully adopted a truly proactive prevention approach to information analysis and sharing.

Here’s another article on the topic.

Posted on August 28, 2007 at 6:30 AM • 13 Comments

Stupidest Terrorist Overreaction Yet?

What? Are the police taking stupid pills?

Two people who sprinkled flour in a parking lot to mark a trail for their offbeat running club inadvertently caused a bioterrorism scare and now face a felony charge.

The competition is fierce, but I think this is a winner.

What bothers me most about the news coverage is that there isn’t even a suggestion that the authorities’ response might have been out of line.

Mayoral spokeswoman Jessica Mayorga said the city plans to seek restitution from the Salchows, who are due in court Sept. 14.

“You see powder connected by arrows and chalk, you never know,” she said. “It could be a terrorist, it could be something more serious. We’re thankful it wasn’t, but there were a lot of resources that went into figuring that out.”

Translation: We screwed up, and we want someone to pay for our mistake.

Posted on August 27, 2007 at 2:34 PM • 124 Comments

Drug Testing an Entire Community

You won’t identify individual users, but you can test for the prevalence of drug use in a community by testing the sewage water.

Presumably, if you push the sample high enough into the pipe, you can test groups of houses or even individual houses.

EDITED TO ADD (7/13): Here’s information on drug numbers in the Rhine. They estimated that, for a population of 38.5 million feeding wastewater into the Rhine down to Düsseldorf, cocaine use amounts to 11 metric tonnes per year. Street value: 1.64 billion Euros.
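
The arithmetic behind those figures is worth a quick sanity check. Assuming the quoted totals, the implied street price works out to roughly 149 Euros per gram:

```python
# Back-of-the-envelope check of the Rhine estimate (all figures from the quote).
tonnes_per_year = 11
grams_per_year = tonnes_per_year * 1_000_000   # 1 metric tonne = 1,000,000 g
street_value_eur = 1.64e9
population = 38_500_000

print(street_value_eur / grams_per_year)   # ~149 EUR per gram implied price
print(grams_per_year / population)         # ~0.29 g per person per year
```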

Posted on August 24, 2007 at 12:35 PM • 42 Comments

Interview with National Intelligence Director Mike McConnell

Mike McConnell, U.S. National Intelligence Director, gave an interesting interview to the El Paso Times.

I don’t think he’s ever been so candid before. For example, he admitted that the nation’s telcos assisted the NSA in their massive eavesdropping efforts. We already knew this, of course, but the government has steadfastly maintained that either confirming or denying this would compromise national security.

There are, of course, moments of surreality. He said that it takes 200 hours to prepare a FISA warrant. Ryan Singel calculated that since there were 2,176 such warrants in 2006, there must be “218 government employees with top secret clearances sitting in rooms, writing only FISA warrants.” Seems unlikely.

But most notable is this bit:

Q. So you’re saying that the reporting and the debate in Congress means that some Americans are going to die?

A. That’s what I mean. Because we have made it so public. We used to do these things very differently, but for whatever reason, you know, it’s a democratic process and sunshine’s a good thing. We need to have the debate.

Ah, the politics of fear. I don’t care if it’s the terrorists or the politicians, refuse to be terrorized. (More interesting discussions on the interview here, here, here, here, here, and here.)

Posted on August 24, 2007 at 6:30 AM • 41 Comments

"Cyberwar" in Estonia

I had been thinking about writing about the massive distributed-denial-of-service attack against the Estonian government last April. It’s been called the first cyberwar, although it is unclear that the Russian government was behind the attacks. And while I’ve written about cyberwar in general, I haven’t really addressed the Estonian attacks.

Now I don’t have to. Kevin Poulsen has written an excellent article on both the reality and the hype surrounding the attacks on Estonia’s networks, commenting on a story in the magazine Wired:

Writer Joshua Davis was dispatched to the smoking ruins of Estonia to assess the damage wrought by last spring’s DDoS attacks against the country’s web, e-mail and DNS servers. Josh is a talented writer, and he returned with a story that offers some genuine insights—a few, though, are likely unintentional.

We see, for example, that Estonia’s computer emergency response team responded to the junk packets with technical aplomb and coolheaded professionalism, while Estonia’s leadership … well, didn’t. Faced with DDoS and nationalistic, cross-border hacktivism—nuisances that have plagued the rest of the wired world for the better part of a decade—Estonia’s leaders lost perspective.

Here’s the best quote, from the speaker of the Estonian parliament, Ene Ergma: “When I look at a nuclear explosion, and the explosion that happened in our country in May, I see the same thing.”

[…]

While cooler heads were combating the first wave of Estonia’s DDoS attacks with packet filters, we learn, the country’s defense minister was contemplating invoking NATO Article 5, which considers an “armed attack” against any NATO country to be an attack against all. That might have obliged the U.S. and other signatories to go to war with Russia, if anyone was silly enough to take it seriously.

Fortunately, nobody important really is that silly. The U.S. has known about DDoS attacks since our own Web War One in 2000, when some of our most trafficked sites—Yahoo, Amazon.com, E-Trade, eBay, and CNN.com—were attacked in rapid succession by Canada. (The culprit was a 15-year-old boy in Montreal).

As in Estonia years later, the attack took America’s leaders by surprise. President Clinton summoned some of the United States’ most respected computer security experts to the White House to meet and discuss options for shoring up the internet. At a photo op afterwards, a reporter lobbed Clinton a cyberwar softball: was this the “electronic Pearl Harbor?”

Estonia’s leaders, among others, could learn from the restraint of Clinton’s response. “I think it was an alarm,” he said. “I don’t think it was Pearl Harbor.

“We lost our Pacific fleet at Pearl Harbor.”

Read the whole thing.

Posted on August 23, 2007 at 1:18 PM • 15 Comments

First Responders

I live in Minneapolis, so the collapse of the Interstate 35W bridge over the Mississippi River earlier this month hit close to home, and was covered in both my local and national news.

Much of the initial coverage consisted of human interest stories, centered on the victims of the disaster and the incredible bravery shown by first responders: the policemen, firefighters, EMTs, divers, National Guard soldiers and even ordinary people, who all risked their lives to save others. (Just two weeks later, three rescue workers died in their almost-certainly futile attempt to save six miners in Utah.)

Perhaps the most amazing aspect of these stories is that there’s nothing particularly amazing about it. No matter what the disaster—hurricane, earthquake, terrorist attack—the nation’s first responders get to the scene soon after.

Which is why it’s such a crime when these people can’t communicate with each other.

Historically, police departments, fire departments and ambulance drivers have all had their own independent communications equipment, so when there’s a disaster that involves them all, they can’t communicate with each other. A 1996 government report said this about the first World Trade Center bombing in 1993: “Rescuing victims of the World Trade Center bombing, who were caught between floors, was hindered when police officers could not communicate with firefighters on the very next floor.”

And we all know that police and firefighters had the same problem on 9/11. You can read details in firefighter Dennis Smith’s book and 9/11 Commission testimony. The 9/11 Commission Report discusses this as well: Chapter 9 talks about the first responders’ communications problems, and commission recommendations for improving emergency-response communications are included in Chapter 12 (pp. 396-397).

In some cities, this communication gap is beginning to close. Homeland Security money has flowed into communities around the country. And while some wasted it on measures like cameras, armed robots and things having nothing to do with terrorism, others spent it on interoperable communications capabilities. Minnesota did that in 2004.

It worked. Hennepin County Sheriff Rich Stanek told the St. Paul Pioneer-Press that lives were saved by disaster planning that had been fine-tuned and improved with lessons learned from 9/11:

“We have a unified command system now where everyone—police, fire, the sheriff’s office, doctors, coroners, local and state and federal officials—operate under one voice,” said Stanek, who is in charge of water recovery efforts at the collapse site.

“We all operate now under the 800 (megahertz radio frequency system), which was the biggest criticism after 9/11,” Stanek said, “and to have 50 to 60 different agencies able to speak to each other was just fantastic.”

Others weren’t so lucky. Louisiana’s first responders had catastrophic communications problems in 2005, after Hurricane Katrina. According to National Defense Magazine:

Police could not talk to firefighters and emergency medical teams. Helicopter and boat rescuers had to wave signs and follow one another to survivors. Sometimes, police and other first responders were out of touch with comrades a few blocks away. National Guard relay runners scurried about with scribbled messages as they did during the Civil War.

A congressional report on preparedness and response to Katrina said much the same thing.

In 2004, the U.S. Conference of Mayors issued a report on communications interoperability. In 25 percent of the 192 cities surveyed, the police couldn’t communicate with the fire department. In 80 percent of cities, municipal authorities couldn’t communicate with the FBI, FEMA and other federal agencies.

The source of the problem is a basic economic one, called the collective action problem. A collective action is one that needs the coordinated effort of several entities in order to succeed. The problem arises when each individual entity’s needs diverge from the collective needs, and there is no mechanism to ensure that those individual needs are sacrificed in favor of the collective need.

Jerry Brito of George Mason University shows how this applies to first-responder communications. Each of the nation’s 50,000 or so emergency-response organizations—local police department, local fire department, etc.—buys its own communications equipment. As you’d expect, they buy equipment as closely suited to their needs as they can. Ensuring interoperability with other organizations’ equipment benefits the common good, but sacrificing their unique needs for that compatibility may not be in the best immediate interest of any of those organizations. There’s no central directive to ensure interoperability, so there ends up being none.

This is an area where the federal government can step in and do good. Too much of the money spent on terrorism defense has been overly specific: effective only if the terrorists attack a particular target or use a particular tactic. Money spent on emergency response is different: It’s effective regardless of what the terrorists plan, and it’s also effective in the wake of natural or infrastructure disasters.

No particular disaster, whether intentional or accidental, is common enough to justify spending a lot of money on preparedness for a specific emergency. But spending money on preparedness in general will pay off again and again.

This essay originally appeared on Wired.com.

EDITED TO ADD (7/13): More research.

Posted on August 23, 2007 at 3:23 AM • 46 Comments

Perceptions of Risk

Another article about risk perception, and why we worry about the wrong things:

Newsrooms are full of English majors who acknowledge that they are not good at math, but still rush to make confident pronouncements about a global-warming “crisis” and the coming of bird flu.

Bird flu was called the No. 1 threat to the world. But bird flu has killed no one in America, while regular flu—the boring kind—kills tens of thousands. New York City internist Marc Siegel says that after the media hype, his patients didn’t want to hear that.

“I say, ‘You need a flu shot.’ You know the regular flu is killing 36,000 per year. They say, ‘Don’t talk to me about regular flu. What about bird flu?'”

Here’s another example. What do you think is more dangerous, a house with a pool or a house with a gun? When, for “20/20,” I asked some kids, all said the house with the gun is more dangerous. I’m sure their parents would agree. Yet a child is 100 times more likely to die in a swimming pool than in a gun accident.

Parents don’t know that partly because the media hate guns and gun accidents make bigger headlines. Ask yourself which incident would be more likely to be covered on TV.

Media exposure clouds our judgment about real-life odds. Of course, it doesn’t help that viewers are as ignorant about probability as reporters are.

Much of what’s written here I’ve said previously, and it echoes this article from Time Magazine (and also this great op-ed from the Los Angeles Times).

EDITED TO ADD (7/13): A great graphic.

Posted on August 22, 2007 at 1:43 PM • 69 Comments

Identification Technology in Personal-Use Tasers

Taser—yep, that’s the company’s name as well as the product’s name—is now selling a personal-use version of their product. It’s called the Taser C2, and it has an interesting embedded identification technology. Whenever the weapon is fired, it also sprays some serial-number bar-coded confetti, so a firing can be traced to a weapon and—presumably—the owner.

Anti-Felon Identification (AFID)

A system to deter misuse through enhanced accountability, AFID includes bar-coded serialization of each cartridge and disperses confetti-like ID tags upon activation.

Posted on August 22, 2007 at 6:57 AM • 50 Comments

Another E-Voting Problem: Not-Secret Ballots

Uh-oh:

Ohio law permits anyone to walk into a county election office and obtain two crucial documents: a list of voters in the order they voted, and a time-stamped list of the actual votes. “We simply take the two pieces of paper together, merge them, and then we have which voter voted and in which way,” said James Moyer, a longtime privacy activist and poll worker who lives in Columbus, Ohio.
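
The attack Moyer describes is literally a two-list merge: both documents preserve the same ordering, so re-linking voters to ballots is trivial. A sketch with invented data:

```python
# Ohio's two public documents, with invented data: voters in the order
# they signed in, and the votes in time-stamped order.
voters_in_order = ["Alice", "Bob", "Carol"]
votes_in_order = [
    ("10:01", "Candidate X"),
    ("10:04", "Candidate Y"),
    ("10:09", "Candidate X"),
]

# Because both lists share one ordering, zip() re-links voter to ballot.
for voter, (timestamp, vote) in zip(voters_in_order, votes_in_order):
    print(f"{voter} voted for {vote} at {timestamp}")
```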

EDITED TO ADD (9/13): Commentary by Ed Felten.

Posted on August 21, 2007 at 7:01 AM • 56 Comments

U.S. Government Threatens Retaliation Against States that Reject REAL ID

REAL ID is the U.S. government plan to impose uniform regulations on state driver’s licenses. It’s a national ID card, in all but cosmetic form. (Here is my essay on the security costs and benefits. These two sites are also good resources.)

Most states hate it: 17 have passed legislation rejecting REAL ID, and many others have such legislation somewhere in process. Now it looks like the federal government is upping the ante, and threatening retaliation against those states that don’t implement REAL ID:

The cards would be mandatory for all “federal purposes,” which include boarding an airplane or walking into a federal building, nuclear facility or national park, Homeland Security Secretary Michael Chertoff told the National Conference of State Legislatures last week. Citizens in states that don’t comply with the new rules will have to use passports for federal purposes.

This sounds tough, but it’s a lot of bluster. The states that have passed anti-REAL-ID legislation lean both Republican and Democrat. The federal government just can’t say that citizens of—for example—Georgia (which passed a bill in May authorizing the Governor to delay implementation of REAL ID) can’t walk into a federal courthouse without a passport. Or can’t board an airplane without a passport—imagine the lobbying by Delta Airlines here. They just can’t.

Posted on August 20, 2007 at 6:01 AM • 93 Comments

DefCon Badge Auction

I am auctioning my DefCon speaker badge on eBay.

The curious phrasing—”upon completion of this auction, Schneier will donate an amount equal to the purchase price to the Electronic Privacy Information Center”—is because eBay has complex rules for charity auctions. So, technically, I am not donating the proceeds of the auction; I am donating a completely different pile of money equal to the proceeds of the auction.

EDITED TO ADD (8/22): Sold for $335. Thank you all.

Posted on August 18, 2007 at 10:57 AM • 18 Comments

Friday Squid Blogging: The Word of the Day is "Squid"

At least it was on August 13:

“NBC Nightly News” anchor Brian Williams had a cameo on “Sesame Street” today, introducing the word of the day, which was “squid.” Just in case there was any confusion, he said the word “squid” 19 times. Squid squid squid squid squid!

There’s video at that link, too. You can watch him ending his report with the words: “Good day, and good squid.”

Another link.

Posted on August 17, 2007 at 3:52 PM • 6 Comments

On the Ineffectiveness of Security Cameras

Information from San Francisco public housing developments:

The 178 video cameras that keep watch on San Francisco public housing developments have never helped police officers arrest a homicide suspect even though about a quarter of the city’s homicides occur on or near public housing property, city officials say.

Nobody monitors the cameras, and the videos are seen only if police specifically request it from San Francisco Housing Authority officials. The cameras have occasionally managed to miss crimes happening in front of them because they were trained in another direction, and footage is particularly grainy at night when most crime occurs, according to police and city officials.

Similar concerns have been raised about the 70 city-owned cameras located at high-crime locations around San Francisco.

[…]

Four homicides have occurred in the past 12 months at the intersection of Laguna and Eddy streets—at the corner of the Plaza East public housing development—including the daytime killing of a 19-year-old in May. A security camera is trained on that corner but so far has not proven useful in making any arrests, Mirkarimi said.

Both the Housing Authority and city have many security cameras in the area, and it wasn’t clear Monday whether the camera in question was purchased by the Housing Authority or city. In any case, the camera hasn’t helped make arrests in the crimes, Mirkarimi said.

“They’re feeling strongly that they don’t work,” Mirkarimi said of Western Addition residents’ views of the security cameras. “They’re just apoplectic why they can’t figure out why nothing comes of this.”

He added that he thinks the cameras may have “a scarecrow effect” in that they give residents the feeling they are safer when they actually have little impact on crime.

That’s not a scarecrow effect. A scarecrow is security theater that works: something that doesn’t actually prevent crime, but deters it by scaring off criminals. Mirkarimi is saying that they have the opposite effect; the cameras make victims feel safer than they really are.

Posted on August 17, 2007 at 1:25 PM • 30 Comments

Wholesale Automobile Surveillance Comes to New York City

New York is installing an automatic toll-collection system for cars in the busiest parts of the city. It’s called congestion pricing, and it promises to reduce both traffic and pollution.

The problem is that it keeps an audit log of which cars are driving where. London’s congestion pricing system is already being used for counterterrorism purposes—and now for regular crime as well. The EZPass automatic toll collection system, used in New York and other places, has been used to prove infidelity in divorce court.

There are good reasons for having this system, but I am worried about another wholesale surveillance tool.

EDITED TO ADD (9/4): EZPass records have been used in criminal court as well.

Posted on August 17, 2007 at 6:48 AM • 26 Comments

Vague Threat Prompts Overreaction

It reads like a hoax:

The Police Department set up checkpoints yesterday in Lower Manhattan and increased security after learning of a vague threat of a radiological attack here.

[…]

The police learned about the threat through an item on the Web site debka.com—a site that Mr. Browne said was believed to have Israeli intelligence and military sources—that said that Qaeda operatives were planning to detonate a truck filled with radiological material in New York, Los Angeles or Miami. Officials say the Web site carries reports that are often wrong, but occasionally right.

Occasionally right? Which U.S. terrorist attack did it predict?

Come on, people: refuse to be terrorized.

Posted on August 16, 2007 at 6:04 AM • 39 Comments

Security Theater

Nice article on security theater from Government Executive:

John Mueller suspects he might have become cable news programs’ go-to foil on terrorism. The author of Overblown: How Politicians and the Terrorism Industry Inflate National Security Threats, and Why We Believe Them (Free Press, 2006) thinks America has overreacted. The greatly exaggerated threat of terrorism, he says, has cost the country far more than terrorist attacks ever did.

Watching his Sept. 12, 2006, appearance on Fox & Friends is unintentionally hilarious. Mueller calmly and politely asks the hosts to at least consider his thesis. But filled with alarm and urgency, they appear bewildered and exasperated. They speak to Mueller as if he is from another planet and cannot be reasoned with.

That reaction is one measure of the contagion of alarmism. Mueller’s book is filled with statistics meant to put terrorism in context. For example, international terrorism annually causes the same number of deaths as drowning in bathtubs or bee stings. It would take a repeat of Sept. 11 every month of the year to make flying as dangerous as driving. Over a lifetime, the chance of being killed by a terrorist is about the same as being struck by a meteor. Mueller’s conclusions: An American’s risk of dying at the hands of a terrorist is microscopic. The likelihood of another Sept. 11-style attack is nearly nil because it would lack the element of surprise. America can easily absorb the damage from most conceivable attacks. And the suggestion that al Qaeda poses an existential threat to the United States is ridiculous. Mueller’s statistics and conclusions are jarring only because they so starkly contradict the widely disseminated and broadly accepted image of terrorism as an urgent and all-encompassing threat.

American reaction to two failed attacks in Britain in June further illustrates our national hysteria. British police found and defused two car bombs before they could be detonated, and two would-be bombers rammed their car into a terminal at Glasgow Airport. Even though no bystanders were hurt and British authorities labeled both episodes failures, the response on American cable television and Capitol Hill was frenzied, frequently emphasizing how many people could have been killed. “The discovery of a deadly car bomb in London today is another harsh reminder that we are in a war against an enemy that will target us anywhere and everywhere,” read an e-mailed statement from Sen. Joe Lieberman, I-Conn. “Terrorism is not just a threat. It is a reality, and we must confront and defeat it.” The bombs that never detonated were “deadly.” Terrorists are “anywhere and everywhere.” Even those who believe it is a threat are understating; it’s “more than a threat.”

Mueller, an Ohio State University political science professor, is more analytical than shrill. Politicians are being politicians, and security businesses are being security businesses, he says. “It’s just like selling insurance – you say, ‘Your house could burn down.’ You don’t have an incentive to say, ‘Your house will never burn down.’ And you’re not lying,” he says. Social science research suggests that humans tend to glom onto the most alarmist perspective even if they are told how unlikely it is, he adds. We inflate the danger of things we don’t control and exaggerate the risk of spectacular events while downplaying the likelihood of common ones. We are more afraid of terrorism than car accidents or street crime, even though the latter are far more common. Statistical outliers like the Sept. 11 terrorist attacks are viewed not as anomalies, but as harbingers of what’s to come.

Lots more in the article.

Posted on August 15, 2007 at 6:18 AM • 42 Comments

Phishing Studies

Two studies. The first one looks at social phishing:

Test subjects received an e-mail with headers spoofed so that it appeared to originate from a member of the subject’s social network. The message body was comprised of the phrase “hey, check this out!” along with a link to a site ostensibly at Indiana University. The link, however, would direct browsers to www.whuffo.com, where they were asked to enter their Indiana username and password. Control subjects were sent the same message originating from a fictitious individual at the university.

The results were striking: apparently, if the friends of a typical college student are jumping off a cliff, the student would too. Even though the spoofed link directed browsers to an unfamiliar .com address, having it sent by a familiar name sent the success rate up from 16 percent in controls to over 70 percent in the experimental group. The response was quick, with the majority of successful phishes coming within the first 12 hours. Victims were also persistent; all responses received a busy server message, but many individuals continued to visit and supply credentials for hours (one individual made 80 attempts).

Females were about 10 percent more likely to be victims in the study, but male students were suckers for their female friends, being 15 percent more likely to respond to phishes from women than men. Education majors had the smallest disparity between experimental and control members, but that’s in part because those majors fell for the control phish half the time. Science majors had the largest disparity—there were no control victims, but the phish had an 80 percent success rate in the experimental group.

Okay, so no surprise there. But this is interesting research into how who we trust can be exploited. If the phisher knows a little bit about you, he can more effectively target your friends.

And we all know that some men are suckers for what women tell them.

Another study looked at the practice of using the last four digits of a credit-card number as an authenticator. Seems that people also trust those who know the first four digits of their credit-card number:

Jakobsson also found a problem related to the practice of credit card companies identifying users by the last four digits of their account numbers, which are random. From his research, it turns out people are willing to respond to fraudulent e-mails if the attacker correctly identifies the first four digits of their account numbers, even though the first four are not random and are based on who issued the card.

“People think [the phrase] ‘starting with’ is just as good as ‘ending with,’ which of course is remarkable insight,” he said.

Another attack comes to mind. You can write a phishing e-mail that simply guesses the last four digits of someone’s credit-card number. You’ll only be right one in ten thousand times, but if you send enough e-mails that might be enough.
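
The expected yield of that guessing attack is easy to work out. A quick calculation (the e-mail volume and response rate here are invented, for illustration only):

```python
# Expected victims from blind last-four-digit guessing.
# The e-mail volume and response rate are invented, for illustration only.
p_match = 1 / 10_000          # chance a random guess matches a recipient's card
emails_sent = 1_000_000
response_rate = 0.5           # hypothetical: fraction who act on a "correct" match

lucky_guesses = emails_sent * p_match
print(lucky_guesses)                    # 100 recipients see their real digits
print(lucky_guesses * response_rate)    # ~50 potential victims
```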

EDITED TO ADD (8/14): Math typo fixed.

Posted on August 14, 2007 at 11:45 AM • 37 Comments

Conspiracy Theories

Fascinating New Scientist article (for subscribers only, but there’s a copy here) on conspiracy theories, and why we believe them:

So what kind of thought processes contribute to belief in conspiracy theories? A study I carried out in 2002 explored a way of thinking sometimes called “major event – major cause” reasoning. Essentially, people often assume that an event with substantial, significant or wide-ranging consequences is likely to have been caused by something substantial, significant or wide-ranging.

I gave volunteers variations of a newspaper story describing an assassination attempt on a fictitious president. Those who were given the version where the president died were significantly more likely to attribute the event to a conspiracy than those who read the one where the president survived, even though all other aspects of the story were equivalent.

To appreciate why this form of reasoning is seductive, consider the alternative: major events having minor or mundane causes—for example, the assassination of a president by a single, possibly mentally unstable, gunman, or the death of a princess because of a drunk driver. This presents us with a rather chaotic and unpredictable relationship between cause and effect. Instability makes most of us uncomfortable; we prefer to imagine we live in a predictable, safe world, so in a strange way, some conspiracy theories offer us accounts of events that allow us to retain a sense of safety and predictability.

Other research has examined how the way we search for and evaluate evidence affects our belief systems. Numerous studies have shown that in general, people give greater attention to information that fits with their existing beliefs, a tendency called “confirmation bias.” Reasoning about conspiracy theories follows this pattern, as shown by research I carried out with Marco Cinnirella at the Royal Holloway University of London, which we presented at the British Psychological Society conference in 2005.

The study, which again involved giving volunteers fictional accounts of an assassination attempt, showed that conspiracy believers found new information to be more plausible if it was consistent with their beliefs. Moreover, believers considered that ambiguous or neutral information fitted better with the conspiracy explanation, while non-believers felt it fitted better with the non-conspiracy account. The same piece of evidence can be used by different people to support very different accounts of events.

This fits with the observation that conspiracy theories often mutate over time in light of new or contradicting evidence. So, for instance, if some new information appears to undermine a conspiracy theory, either the plot is changed to make it consistent with the new information, or the theorists question the legitimacy of the new information. Theorists often argue that those who present such information are themselves embroiled in the conspiracy. In fact, because of my research, I have been accused of being secretly in the pay of various western intelligence services (I promise, I haven’t seen a penny).

Lots of good stuff in the article, including instructions on how to create your own conspiracy theory.

Posted on August 14, 2007 at 6:17 AM • 50 Comments

Paid Informants in Muslim Communities

This is a good article about the use of paid informants in Muslim communities, and how they are both creating potential terrorists where none existed before and sowing mistrust among people.

Defense lawyers in a number of other terrorism suspect cases accused informants of solely seeking financial boon by creating so-called terrorists that did not exist.

According to court records, Eldawoody was paid $100,000 over a period of 3 years.

Since Siraj’s conviction, Eldawoody has his rent covered and receives a monthly stipend of $3,200.

According to The Washington Post, a police spokesman indicated the direct payments to Eldawoody would likely continue “indefinitely.”

With such incentives, critics argue, informants are likely to be created out of thin air to join the “inform-and-cash” industry.

Meanwhile, the Muslim community across the country is feeling the heat of being closely watched.

“This is creating mistrust between our community and law enforcement officials,” Ayloush said.

In light of their extensive criminal records, Ayloush added, these individuals would neither qualify as police officers nor as FBI agents, yet they are on the payroll of law enforcement agencies and are allowed to do law enforcement work.

“We all respect hardworking law enforcement agents,” Ayloush said. “But mercenary informants? Hardly.”

Posted on August 13, 2007 at 12:50 PM • 28 Comments

House of Lords on Computer Security

The Science and Technology Committee of the UK House of Lords has issued a report (pdf here) on “Personal Internet Security.” It’s 121 pages long. Richard Clayton, who helped the committee, has a good summary of the report on his blog. Among other things, the Lords recommend various consumer notification standards, a data-breach disclosure law, and a liability regime for software.

Another summary lists:

  • Increase the resources and skills available to the police and criminal justice system to catch and prosecute e-criminals.
  • Establish a centralised and automated system, administered by law enforcement, for the reporting of e-crime.
  • Provide incentives to banks and other companies trading online to improve data security by establishing a data security breach notification law.
  • Improve standards of new software and hardware by moving towards legal liability for damage resulting from security flaws.
  • Encourage Internet Service Providers to improve the security offered to customers by establishing a “kite mark” for internet services.

If that sounds like a lot of the things I’ve been saying for years, there’s a reason for that. Earlier this year, I testified before the committee (transcript here), where I recommended some of these things. (Sadly, I didn’t get to wear a powdered wig.)

This report is a long way from anything even closely resembling a law, but it’s a start. Clayton writes:

The Select Committee reports are the result of in-depth study of particular topics, by people who reached the top of their professions (who are therefore quick learners, even if they start by knowing little of the topic), and their careful reasoning and endorsement of convincing expert views, carries considerable weight. The Government is obliged to formally respond, and there will, at some point, be a few hours of debate on the report in the House of Lords.

If you’re interested, the entire body of evidence the committee considered is here (pdf version here). I don’t recommend reading it; it’s absolutely huge, and a lot of it is corporate drivel.

EDITED TO ADD (8/13): I have written about software liabilities before, here and here.

EDITED TO ADD (8/22): Good article here:

They agreed ‘wholeheartedly’ with security guru, and successful author, Bruce Schneier, that the activities of ‘legitimate researchers’ trying to ‘break things to learn to think like the bad guys’ should not be criminalized in forthcoming UK legislation, and they supported the pressing need for a data breach reporting law; in drafting such a law, the UK government could learn from lessons learnt in the US states that have such laws. Such a law should cover the banks, and other sectors, and not simply apply to “communication providers”—a proposal presently under consideration by the EU Commission, which the peers clearly believed would be ineffective in creating incentives to improve security across the board.

Posted on August 13, 2007 at 6:35 AM • 21 Comments

Airport Security Breach

One of the problems with airport security checkpoints is that the system is a single point of failure. If someone slips through, the only way to regain security is for the entire airport to be emptied and everyone searched again. This happens rarely, but when it does, it can close an airport for hours.

It happened today at the Charlotte airport.

One sentence struck me:

Passengers on another 15 planes that took off after the breach will have to go through screening again when they reach their destinations, the TSA said.

It’s understandable why the TSA would want to screen everybody once someone evades security: that person could give his contraband to someone else. And since the entire airport system is a single secure area—once you go through security at one airport, you are considered to be inside security at all airports—it makes sense for those passengers to be screened if they’re changing planes.

But it must feel weird to have to go through screening after flying, before being able to leave the airport.

Posted on August 10, 2007 at 11:12 AM • 36 Comments

Police Data Mining Done Right

It’s nice to find an example of the police using data mining correctly: not as security theater, but more as a business-intelligence tool:

When Munroe took over as chief two years ago, his department was drowning in crime and data. Police had a mass of data from 911 calls and crime reports; what they didn’t have was a way to connect the dots and see a pattern of behaviour.

Using some sophisticated software and hardware they started overlaying crime reports with other data, such as weather, traffic, sports events and paydays for large employers. The data was analyzed three times a day and something interesting emerged: Robberies spiked on paydays near cheque cashing storefronts in specific neighbourhoods. Other clusters also became apparent, and pretty soon police were deploying resources in advance and predicting where crime was most likely to occur.
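
What's described here is essentially a join-and-count over incident data. A toy sketch of the payday overlay, with invented records:

```python
from collections import Counter

# Invented incident records: (date, offence, neighbourhood).
incidents = [
    ("2007-08-03", "robbery", "Downtown"),
    ("2007-08-03", "robbery", "Downtown"),
    ("2007-08-09", "theft",   "Westside"),
    ("2007-08-17", "robbery", "Downtown"),
]
paydays = {"2007-08-03", "2007-08-17"}   # large employers' pay dates (invented)

# Count robberies by neighbourhood, split by payday vs. non-payday.
counts = Counter(
    (neighbourhood, date in paydays)
    for date, offence, neighbourhood in incidents
    if offence == "robbery"
)
print(counts)   # robberies clustering on paydays show where to deploy in advance
```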

Posted on August 10, 2007 at 6:51 AM • 35 Comments

The New U.S. Wiretapping Law and Security

Last week, Congress gave President Bush new wiretapping powers. I was going to write an essay on the security implications of this, but Susan Landau beat me to it:

To avoid wiretapping every communication, NSA will need to build massive automatic surveillance capabilities into telephone switches. Here things get tricky: Once such infrastructure is in place, others could use it to intercept communications.

Grant the NSA what it wants, and within 10 years the United States will be vulnerable to attacks from hackers across the globe, as well as the militaries of China, Russia and other nations.

Such threats are not theoretical. For almost a year beginning in April 2004, more than 100 phones belonging to members of the Greek government, including the prime minister and ministers of defense, foreign affairs, justice and public order, were spied on with wiretapping software that was misused. Exactly who placed the software and who did the listening remain unknown. But they were able to use software that was supposed to be used only with legal permission.

[…]

U.S. communications technology is fragile and easily penetrated. While advanced, it is not decades ahead of that of our friends or our rivals. Compounding the issue is a key facet of modern systems design: Intercept capabilities are likely to be managed remotely, and vulnerabilities are as likely to be global as local. In simplifying wiretapping for U.S. intelligence, we provide a target for foreign intelligence agencies and possibly rogue hackers. Break into one service, and you get broad access to U.S. communications.

More about the Greek wiretapping scandal. And I would be remiss if I didn’t mention the excellent book by Whitfield Diffie and Susan Landau on the subject: Privacy on the Line: The Politics of Wiretapping and Encryption.

Posted on August 9, 2007 at 3:29 PM • 46 Comments

New York Times Movie-Plot Threat Contest

My contest idea (first and second) has gone mainstream:

Hearing about these rules got me thinking about what I would do to maximize terror if I were a terrorist with limited resources. I’d start by thinking about what really inspires fear. One thing that scares people is the thought that they could be a victim of an attack. With that in mind, I’d want to do something that everybody thinks might be directed at them, even if the individual probability of harm is very low. Humans tend to overestimate small probabilities, so the fear generated by an act of terrorism is greatly disproportionate to the actual risk.

[…]

I’m sure many readers have far better ideas. I would love to hear them. Consider that posting them could be a form of public service: I presume that a lot more folks who oppose and fight terror read this blog than actual terrorists. So by getting these ideas out in the open, it gives terror fighters a chance to consider and plan for these scenarios before they occur.

Far more interesting than the suggested attacks are the commenters who accuse him of helping the terrorists. Not that I’m surprised; there were people who accused me of helping the terrorists.

But while it’s one thing for this kind of thing to happen in my blog, it’s another for it to happen in a mainstream blog on The New York Times website.

EDITED TO ADD (8/9): Sadly, he had to explain himself.

Posted on August 9, 2007 at 12:48 PM • 29 Comments

Assurance

Over the past several months, the state of California conducted the most comprehensive security review yet of electronic voting machines. People I consider to be security experts analyzed machines from three different manufacturers, performing both a red-team attack analysis and a detailed source code review. Serious flaws were discovered in all machines and, as a result, the machines were all decertified for use in California elections.

The reports are worth reading, as is much of the blog commentary on the topic. The reviewers were given an unrealistic timetable and had trouble getting needed documentation. The fact that major security vulnerabilities were found in all machines is a testament to how poorly they were designed, not to the thoroughness of the analysis. Yet California Secretary of State Debra Bowen has conditionally recertified the machines for use, as long as the makers fix the discovered vulnerabilities and adhere to a lengthy list of security requirements designed to limit future security breaches and failures.

While this is a good effort, it has security completely backward. It begins with a presumption of security: If there are no known vulnerabilities, the system must be secure. If there is a vulnerability, then once it’s fixed, the system is again secure. How anyone comes to this presumption is a mystery to me. Is there any version of any operating system anywhere where the last security bug was found and fixed? Is there a major piece of software anywhere that has been, and continues to be, vulnerability-free?

Yet again and again we react with surprise when a system has a vulnerability. Last weekend at the hacker convention DefCon, I saw new attacks against supervisory control and data acquisition (SCADA) systems—those are embedded control systems found in infrastructure systems like fuel pipelines and power transmission facilities—electronic badge-entry systems, MySpace, and the high-security locks used in places like the White House. I will guarantee you that the manufacturers of these systems all claimed they were secure, and that their customers believed them.

Earlier this month, the government disclosed that the computer system of the US-Visit border control system is full of security holes. Weaknesses existed in all control areas and computing device types reviewed, the report said. How exactly is this different from any large government database? I’m not surprised that the system is so insecure; I’m surprised that anyone is surprised.

We’ve been assured again and again that RFID passports are secure. When researcher Lukas Grunwald successfully cloned one last year at DefCon, we were told there was little risk. This year, Grunwald revealed that he could use a cloned passport chip to sabotage passport readers. Government officials are again downplaying the significance of this result, although Grunwald speculates that this or another similar vulnerability could be used to take over passport readers and force them to accept fraudulent passports. Anyone care to guess who’s more likely to be right?

It’s all backward. Insecurity is the norm. If any system—whether a voting machine, operating system, database, badge-entry system, RFID passport system, etc.—is ever built completely vulnerability-free, it’ll be the first time in the history of mankind. It’s not a good bet.

Once you stop thinking about security backward, you immediately understand why the current software security paradigm of patching doesn’t make us any more secure. If vulnerabilities are so common, finding a few doesn’t materially reduce the quantity remaining. A system with 100 patched vulnerabilities isn’t more secure than a system with 10, nor is it less secure. A patched buffer overflow doesn’t mean that there’s one less way attackers can get into your system; it means that your design process was so lousy that it permitted buffer overflows, and there are probably thousands more lurking in your code.

Diebold Election Systems has patched a certain vulnerability in its voting-machine software twice, and each patch contained another vulnerability. Don’t tell me it’s my job to find another vulnerability in the third patch; it’s Diebold’s job to convince me it has finally learned how to patch vulnerabilities properly.

Several years ago, former National Security Agency technical director Brian Snow began talking about the concept of “assurance” in security. Snow, who spent 35 years at the NSA building systems at security levels far higher than anything the commercial world deals with, told audiences that the agency couldn’t use modern commercial systems with their backward security thinking. Assurance was his antidote:

Assurances are confidence-building activities demonstrating that:

  1. The system’s security policy is internally consistent and reflects the requirements of the organization,
  2. There are sufficient security functions to support the security policy,
  3. The system functions to meet a desired set of properties and only those properties,
  4. The functions are implemented correctly, and
  5. The assurances hold up through the manufacturing, delivery and life cycle of the system.

Basically, demonstrate that your system is secure, because I’m just not going to believe you otherwise.

Assurance is less about developing new security techniques than about using the ones we have. It’s all the things described in books like Building Secure Software, Software Security and Writing Secure Code. It’s some of what Microsoft is trying to do with its Security Development Lifecycle (SDL). It’s the Department of Homeland Security’s Build Security In program. It’s what every aircraft manufacturer goes through before it puts a piece of software in a critical role on an aircraft. It’s what the NSA demands before it purchases a piece of security equipment. As an industry, we know how to provide security assurance in software and systems; we just tend not to bother.

And most of the time, we don’t care. Commercial software, as insecure as it is, is good enough for most purposes. And while backward security is more expensive over the life cycle of the software, it’s cheaper where it counts: at the beginning. Most software companies are short-term smart to ignore the cost of never-ending patching, even though it’s long-term dumb.

Assurance is expensive, in terms of money and time for both the process and the documentation. But the NSA needs assurance for critical military systems; Boeing needs it for its avionics. And the government needs it more and more: for voting machines, for databases entrusted with our personal information, for electronic passports, for communications systems, for the computers and systems controlling our critical infrastructure. Assurance requirements should be common in IT contracts, not rare. It’s time we stopped thinking backward and pretending that computers are secure until proven otherwise.

This essay originally appeared on Wired.com.

Posted on August 9, 2007 at 8:19 AM37 Comments

Gun-Shaped Laptop Battery

Seems like bad design:

My laptop bag has scared TSA security personnel at several airports recently, requiring manual bag inspections each time. And when it happened again this week I finally figured out what it is that was freaking them out when the bag went through the x-ray machine—it’s the spare laptop battery I always carry. This would never be an issue if the battery were inside the laptop, but the spare battery (depending on how it is lying in the back) can catch attention. But, TSA issues aside, look at the shape of the battery. You just have to wonder—what on earth was IBM thinking?

The answer, of course, is obvious: it never occurred to them.

Posted on August 8, 2007 at 2:12 PM31 Comments

Another Biometric: Vein Patterns

Interesting:

In fact, vein recognition technology has one fundamental advantage over finger print systems: vein patterns in fingers and palms are biometric characteristics that are not left behind unintentionally in everyday activities. In tests conducted by heise, even extreme close-ups of a palm taken with a digital camera, whose RAW format can be filtered systematically to emphasize the near-infrared range, were unable to deliver a clear reproduction of the line pattern. With the transluminance method used by Hitachi it is practically impossible to read out the pattern unnoticed with today’s technology. Another side effect of near-infrared imaging also has relevance to security: vein patterns of inanimate body parts become useless after a few minutes, due to the increasing deoxygenation of the tissue.

Even if someone manages to obtain a person’s vein pattern, there is no known method for creating a functioning dummy, as is the case for finger prints, where this can be achieved even with home-made tools, as demonstrated by the German computer magazine c’t. As is the case with vendors of finger print systems, Hitachi and Fujitsu do not disclose information on liveness detection methods used in their products.

Besides the considerably improved forgery protection, the vendors of vein recognition technology claim further advantages. Compared to finger print sensors, vein recognition systems are said to deliver false rejection rates (FRR) two orders of magnitude below those of finger print systems when operating at a comparable false acceptance rate (FAR). This can be ascribed to the basic structure of vein patterns having a much higher degree of variability than finger prints.

This is all interesting. I don’t know about the details of the technology, but the discussions of false positives, false negatives, and forgeability are the right ones to have. Remember, though, that while biometrics are an effective security technology, they’re not a panacea.
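
To make the false-positive/false-negative trade-off concrete: a biometric matcher produces a similarity score, and the threshold you choose trades false acceptances against false rejections. Here's a toy sketch in Python with made-up scores (these are not Hitachi's or Fujitsu's numbers):

    def far_frr(impostor_scores, genuine_scores, threshold):
        """False acceptance and false rejection rates at a given threshold."""
        far = sum(s >= threshold for s in impostor_scores) / len(impostor_scores)
        frr = sum(s < threshold for s in genuine_scores) / len(genuine_scores)
        return far, frr

    impostors = [0.10, 0.22, 0.31, 0.40, 0.55]   # scores for the wrong person
    genuines  = [0.48, 0.67, 0.79, 0.83, 0.95]   # scores for the right person
    for t in (0.30, 0.50, 0.70):
        far, frr = far_frr(impostors, genuines, t)
        print(f"threshold {t}: FAR {far:.0%}, FRR {frr:.0%}")
    # Raising the threshold lowers FAR but raises FRR; a vendor claiming a
    # lower FRR "at comparable FAR" is claiming a better score distribution.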

Posted on August 8, 2007 at 7:02 AM32 Comments

Asking for Passwords

How do you get a password out of an IRS agent? Just ask:

Sixty-one of the 102 people who got the test calls, including managers and a contractor, complied with a request that the employee provide his or her user name and temporarily change his or her password to one the caller suggested, according to the Treasury Inspector General for Tax Administration, an office that does oversight of the Internal Revenue Service.

Wow. At the very least, I would have expected to have to give them chocolate.

Posted on August 7, 2007 at 6:53 AM34 Comments

Details on the UK Liquid Terrorist Plot

U.S. Homeland Security Secretary Michael Chertoff is releasing details about last summer’s liquid-bomb plot:

Sources tell ABC News that after studying the plot, government officials have concluded that without the tip to British authorities, the suspects could have likely smuggled the bomb components onboard using sports drinks.

The components of that explosives mixture can be bought at any drugstore or supermarket; however, there is some question whether the potential terrorists would have had the skill to properly mix and detonate their explosive cocktails in-flight.

But they can work—scientists at Sandia National Laboratory conducted a test using the formula, and when a small amount of liquid in a container was hit with a tiny burst of electrical current, a large explosion followed.

The test results were reviewed today by ABC terrorism consultant Richard Clarke, who said that while frequent travelers are upset by the current limits on liquids in carry-on baggage, “when they see this film, they ought to know it’s worth going through those problems.”

There has been a lot of speculation since last year about the plausibility of the plot, with most chemists falling on the “unrealistic” side.

I’m still skeptical, especially because the liquid ban doesn’t actually ban liquids. If they’re so dangerous, why can anyone take 12 ounces of any liquid on any plane at any time? That’s the real question, which TSA Administrator Kip Hawley deftly didn’t answer in my conversation with him last week. (I brought it on a plane again yesterday: an opaque 12-ounce bottle labeled “saline,” emptied and filled with another liquid, and then resealed. I held it up to the TSA official and made sure it was okay. It was.)

Another quote:

One official who briefed ABC News said explosives and security experts who examined the plot were “stunned at the extent that the suspects had gamed the system to exploit its weaknesses.”

“There’s no question that they had given a lot of thought to how they might smuggle containers with liquid explosives onto airplanes,” Chertoff said. “Without getting into things that are still classified, they obviously paid attention to the ways in which they thought they might be able to disguise these explosives as very innocent types of everyday articles.”

Well, yeah. That’s the game you’re stuck playing. From my conversation with Hawley (that’s me talking):

But you’re playing a game you can’t win. You ban guns and bombs, so the terrorists use box cutters. You ban small blades and knitting needles, and they hide explosives in their shoes. You screen shoes, so they invent a liquid explosive. You restrict liquids, and they’re going to do something else. The terrorists are going to look at what you’re confiscating, and they’re going to design a plot to bypass your security.

Stop focusing on the tactics; focus on the broad threats.

Posted on August 6, 2007 at 11:34 PM71 Comments

Security-Theater Cameras Coming to New York

In this otherwise lopsided article about security cameras, this one quote stands out:

But Steve Swain, who served for years with the London Metropolitan Police and its counter-terror operations, doubts the power of cameras to deter crime.

“I don’t know of a single incident where CCTV has actually been used to spot, apprehend or detain offenders in the act,” he said, referring to the London system. Swain now works for Control Risk, an international security firm.

Asked about their role in possibly stopping acts of terror, he said pointedly: “The presence of CCTV is irrelevant for those who want to sacrifice their lives to carry out a terrorist act.”

[…]

Swain does believe the cameras have great value in investigation work. He also said they are necessary to reassure the public that law enforcement is being aggressive.

“You need to do this piece of theater so that if the terrorists are looking at you, they can see that you’ve got some measures in place,” he said.

Did you get that? Swain doesn’t believe that cameras deter crime, but he wants cities to spend millions on them so that the terrorists “can see that you’ve got some measures in place.”

Anyone have any idea why we’re better off doing this than other things that may actually deter crime and terrorism?

Posted on August 6, 2007 at 3:23 PM37 Comments

British Report on E-Voting

In even more voting news, the UK Electoral Commission released a report on the 2007 e-voting and e-counting pilots. The results are none too good:

The Commission’s criticism of e-counting and e-voting was scathing; concerning the latter, it said that the “security risk involved was significant and unacceptable.” They recommend against further trials until the problems identified are resolved. Quality assurance and planning were found to be inadequate, predominantly stemming from insufficient timescales. In the case of the six e-counting trials, three were abandoned, two were delayed, leaving only one that could be classed as a success. Poor transparency and value for money are also cited as problems. More worryingly, the Commission identify a failure to learn from the lessons of previous pilot programmes.

Posted on August 6, 2007 at 10:21 AM8 Comments

Florida E-Voting Study

Florida just recently released another study of the Diebold voting machines. They—real security researchers, as in the California study, not posers—studied v4.6.5 of the Diebold TSx and v1.96.8 of the Diebold Optical Scan. (California studied older versions: v4.6.4 of the TSx and v1.96.6 of the Optical Scan.)

The most interesting issues are (1) Diebold’s apparent “find-then-patch” approach to computer security, and (2) Diebold’s lousy use of cryptography.

Among the findings:

  • Section 3.5. They use RSA signatures, apparently to address previously documented flaws in the literature. But their signature verification step has a problem: it computes H = signature**3 mod N, and then compares only 160 bits of H with the SHA-1 hash of the message. This is a natural way to implement RSA signatures if you've just read a security textbook, but it is insecure—the report demonstrates a 250-line Java program that forges RSA signatures over (basically) arbitrary messages. (A sketch of the broken check appears after this list.)
  • Section 3.10.3. The original Hopkins report talked about the lack of crypto for network (or dialup) communications between a TSX voting machine and the back-end GEMs server. Apparently, Diebold tried to use SSL to fix the problem. The RABA report analyzed Diebold’s SSL usage and found a security problem. Diebold then tried to patch their SSL implementation. This new report looks at the patched version, and finds that it is still vulnerable to a man-in-the-middle attack.
  • Section 3.7.1.1. Key management. Avi Rubin has already summarized some of the highlights.

    This is arguably worse than having a fixed static key in all of the machines. Because with knowledge of the machine’s serial number, anyone can calculate all of the secret keys. Whereas before, someone would have needed access to the source code or the binary in the machine.

    Other attacks mentioned in the report include swapping two candidate vote counters and many other vote switching attacks. The supervisor PIN is protected with weak cryptography, and once again Diebold has shown that they do not have even a basic understanding of how to apply cryptographic mechanisms.
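
To make the Section 3.5 flaw concrete, here is a minimal sketch of the broken verification step as the report describes it. This is an illustration, not Diebold's actual code; I'm assuming the low 160 bits are the ones compared, though which 160 bits doesn't change the problem:

    import hashlib

    def flawed_verify(signature: int, message: bytes, n: int) -> bool:
        h = pow(signature, 3, n)          # H = signature**3 mod N, with e = 3
        digest = int.from_bytes(hashlib.sha1(message).digest(), "big")
        # BUG: only 160 of H's bits are checked against the hash. A forger
        # needs no private key: any value whose cube has the right 160 bits
        # passes, and the rest of H is a free variable to play with.
        return h & ((1 << 160) - 1) == digest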

Avi Rubin has a nice overall summary, too:

So, Diebold is doing some things better than they did before when they had absolutely no security, but they have yet to do them right. Anyone taking any of our cryptography classes at Johns Hopkins, for example, would do a better job applying cryptography. If you read the SAIT report, this theme repeats throughout.

Right. These are classic examples of problems that can arise if (1) you “roll your own” crypto and/or (2) employ “find and patch” rather than a principled approach to security.
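
The SSL item in Section 3.10.3 is a good example of why patch-and-hope fails: SSL stops a man-in-the-middle only if the client actually authenticates the server's certificate chain and binds the certificate to the peer it expects. The report doesn't publish Diebold's code, so the following Python sketch merely shows the checks whose absence typically leaves the hole open; the CA file, hostname, and port are hypothetical:

    import socket, ssl

    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.load_verify_locations(cafile="gems_ca.pem")   # hypothetical trusted root
    ctx.verify_mode = ssl.CERT_REQUIRED               # refuse unauthenticated peers
    ctx.check_hostname = True                         # bind the cert to the name

    # Hypothetical back-end server; the point is that both checks above
    # must pass before any election data crosses the wire.
    with socket.create_connection(("gems.example", 4443)) as raw:
        with ctx.wrap_socket(raw, server_hostname="gems.example") as tls:
            tls.sendall(b"results upload ...")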

It all makes me wonder what new problems will arise from future security patches.

The good news is that Florida has decided not to certify the TSX at this time. They may try to certify a revised version of the OS (optical scan) system.

Posted on August 6, 2007 at 6:34 AM42 Comments

More on the California Voting Machine Review

This is a follow-on to this post. What’s new is that the source code reviews are now available.

I haven’t had the chance to review the reports. Matt Blaze has a good summary on his blog:

We found significant, deeply-rooted security weaknesses in all three vendors’ software. Our newly-released source code analyses address many of the supposed shortcomings of the red team studies, which have been (quite unfairly, I think) criticized as being “unrealistic”. It should now be clear that the red teams were successful not because they somehow “cheated,” but rather because the built-in security mechanisms they were up against simply don’t work properly. Reliably protecting these systems under operational conditions will likely be very hard.

I just read Matt Bishop’s description of the miserable schedule and support that the California Secretary of State’s office gave to the voting-machine review effort:

The major problem with this study is time. Although the study did not start until mid-June, the end date was set at July 20, and the Secretary of State said that under no circumstances would it be extended.

[…]

The second problem was lack of information. In particular, various documents did not become available until July 13, too late to be of any value to the red teams, and the red teams did not have several security-related documents. Further, some software that would have materially helped the study was never made available.

Matt Blaze, who led the team that reviewed the Sequoia code, had similar things to say:

Reviewing that much code in less than two months was, to say the least, a huge undertaking. We spent our first week (while we were waiting for the code to arrive) setting up infrastructure, including a Trac Wiki on the internal network that proved invaluable for keeping everyone up to speed as we dug deeper and deeper into the system. By the end of the project, we were literally working around the clock.

It seems that we have a new problem to worry about: the Secretary of State has no clue how to get a decent security review done. Perversely, it was good luck that the voting machines tested were so horribly bad that the reviewers found vulnerabilities despite a ridiculous schedule—one month simply isn’t reasonable—and egregious foot-dragging by vendors in providing needed materials.

Next time, we might not be so lucky. If one vendor sees he can avoid embarrassment by stalling delivery of his most vulnerable source code for four weeks, we might end up with the Secretary of State declaring that the system survived vigorous testing and therefore is secure. Given that refusing cooperation incurred no penalty in this series of tests, we can expect vendors to work that angle more energetically in the future.

The Secretary of State’s own web page gives top billing to the need “to restore the public’s confidence in the integrity of the electoral process,” while the actual security of the machines is relegated to second place.

We need real security evaluations, not feel-good fake tests. I wish this were more the former than the latter.

EDITED TO ADD (8/4): California Secretary of State Bowen’s certification decisions are online.

She has totally decertified the ES&S Inkavote Plus system, used in L.A. County, because of ES&S noncompliance with the Top to Bottom Review. The Diebold and Sequoia systems have been decertified and conditionally recertified. The same was done with one Hart Intercivic system (system 6.2.1). (Certification of the Hart system 6.1 was voluntarily withdrawn.)

To those who thought she was staging this review as security theater, this seems like evidence to the contrary. She wants to do the right thing, but has no idea how to conduct a security review.

Another article.

EDITED TO ADD (8/4): The Diebold software is pretty bad.

EDITED TO ADD (8/5): Ed Felten comments:

It is interesting (at least to me as a computer security guy) to see how often the three companies made similar mistakes. They misuse cryptography in the same ways: using fixed unchangeable keys, using ciphers in ECB mode, using a cyclic redundancy code for data integrity, and so on. Their central tabulators use poorly protected database software. Their code suffers from buffer overflows, integer overflow errors, and format string vulnerabilities. They store votes in a way that compromises the secret ballot.

And Avi Rubin comments:

As I read the three new reports, I could not help but marvel at the fact that so many places in the US are using these machines. When it comes to prescription medications, we perform extensive tests before drugs hit the market. When it comes to aviation, planes are held to standards and tested before people fly on them. But, it seems that the voting machines we are using are even more poorly designed and poorly implemented than I had realized.

He’s right, of course.
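
One item on Felten's list is worth unpacking: using a cyclic redundancy code for data integrity. A CRC detects accidental corruption, not tampering, because verifying it involves no secret; whoever alters the data simply recomputes the checksum. A message authentication code keyed with a secret is the right tool against adversaries. A small Python sketch of the difference (the key and record are made up):

    import hashlib, hmac, zlib

    def crc_ok(data: bytes, checksum: int) -> bool:
        return zlib.crc32(data) == checksum           # no secret involved

    def hmac_ok(key: bytes, data: bytes, tag: bytes) -> bool:
        expected = hmac.new(key, data, hashlib.sha256).digest()
        return hmac.compare_digest(expected, tag)

    record = b"candidate=A;votes=1000"
    tampered = b"candidate=B;votes=9999"

    # An attacker rewrites the record and recomputes the CRC: check passes.
    assert crc_ok(tampered, zlib.crc32(tampered))

    # With an HMAC, a tag forged without the real key is rejected.
    key = b"per-machine secret"                       # hypothetical key
    bad_tag = hmac.new(b"wrong key", tampered, hashlib.sha256).digest()
    assert not hmac_ok(key, tampered, bad_tag)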

Posted on August 3, 2007 at 12:55 PM37 Comments

Conversation with Kip Hawley, TSA Administrator (Part 5)

This is Part 5 of a five-part series. Link to whole thing.

BS: So far, we’ve only talked about passengers. What about airport workers? Nearly one million workers move in and out of airports every day without ever being screened. The JFK plot, as laughably unrealistic as it was, highlighted the security risks of airport workers. As with any security problem, we need to secure the weak links, rather than make already strong links stronger. What about airport employees, delivery vehicles, and so on?

KH: I totally agree with your point about a strong base level of security everywhere and not creating large gaps by over-focusing on one area. This is especially true with airport employees. We do background checks on all airport employees who have access to the sterile area. These employees are in the same places doing the same jobs day after day, so when someone does something out of the ordinary, it immediately stands out. They serve as an additional set of eyes and ears throughout the airport.

Even so, we should do more on airport employees and my House testimony of April 19 gives details of where we’re heading. The main point is that everything you need for an attack is already inside the perimeter of an airport. For example, why take lighters from people who work with blowtorches in facilities with millions of gallons of jet fuel?

You could perhaps feel better by setting up employee checkpoints at entry points, but you’d hassle a lot of people at great cost with minimal additional benefit, and a smart, patient terrorist could find a way to beat you. Today’s random, unpredictable screenings that can and do occur everywhere, all the time (including delivery vehicles, etc.) are harder to defeat. With the latter, you make it impossible to engineer an attack; with the former, you give the blueprint for exactly that.

BS: There’s another reason to screen pilots and flight attendants: they go through the same security lines as passengers. People have to remember that it’s not pilots being screened, it’s people dressed as pilots. You either have to implement a system to verify that people dressed as pilots are actual pilots, or just screen everybody. The latter choice is far easier.

I want to ask you about general philosophy. Basically, there are three broad ways of defending airplanes: preventing bad people from getting on them (ID checks), preventing bad objects from getting on them (passenger screening, baggage screening), and preventing bad things from happening on them (reinforcing the cockpit door, sky marshals). The first one seems to be a complete failure, the second one is spotty at best. I’ve always been a fan of the third. Any future developments in that area?

KH: You are too eager to discount the first—stopping bad people from getting on planes. That is the most effective! Don’t forget about all the intel work done partnering with other countries to stop plots before they get here (UK liquids, NY subway), all the work done to keep them out either through no-flys (at least several times a month) or by Customs & Border Protection on their way in, and law enforcement once they are here (Ft. Dix). Then, you add the behavior observation (both uniformed and not) and identity validation (as we take that on) and that’s all before they get to the checkpoint.

The screening-for-things part, we’ve discussed, so I’ll jump to in-air measures. Reinforced, locked cockpit doors and air marshals are indeed huge upgrades since 9/11. Along the same lines, you have to consider the role of the engaged flight crew and passengers—they are quick to give a heads-up about suspicious behavior and they can, and do, take decisive action when threatened. Also, there are thousands of flights covered by pilots who are qualified as law enforcement and are armed, as well as the agents from other government entities like the Secret Service and FBI who provide coverage as well. There is also a fair amount of communications with the flight deck during flights if anything comes up en route—either in the aircraft or if we get information that would be of interest to them. That allows “quiet” diversions or other preventive measures. Training is, of course, important too. Pilots need to know what to do in the event of a missile sighting or other event, and need to know what we are going to do in different situations. Other things coming: better air-to-ground communications for air marshals and flight information, including, possibly, video.

So, when you boil it down, keeping the bomb off the plane is the number one priority. A terrorist has to know that once that door closes, he or she is locked into a confined space with dozens, if not hundreds, of zero-tolerance people, some of whom may be armed with firearms, not to mention the memory of United Flight 93.

BS: I’ve read repeated calls to privatize airport security: to return it to the way it was pre-9/11. Personally, I think it’s a bad idea, but I’d like your opinion on the question. And regardless of what you think should happen, do you think it will happen?

KH: From an operational security point of view, I think it works both ways. So it is not a strategic issue for me.

SFO, our largest private airport, has excellent security and is on a par with its federalized counterparts (in fact, I am on a flight from there as I write this). One current federalized advantage is that we can surge resources around the system with no notice; essentially, the ability to move from anywhere to anywhere and mix TSOs with federal air marshals in different force packages. We would need to be sure we don’t lose that interchangeability if we were to expand privatized screening.

I don’t see a major security or economic driver that would push us to large-scale privatization. Economically, the current cost-plus model makes it a better deal for the government in smaller airports than in bigger. So, maybe more small airports will privatize. If Congress requires collective bargaining for our TSOs, that will impose an additional overhead cost of about $500 million, which would shift the economic balance significantly toward privatized screening. But unless that happens, I don’t see major change in this area.

BS: Last question. I regularly criticize overly specific security measures, because forcing the terrorists to make minor modifications in their tactics doesn’t make us any safer. We’ve talked about specific airline threats, but what about airplanes as a specific threat? On the one hand, if we secure our airlines and the terrorists all decide instead to bomb shopping malls, we haven’t improved our security very much. On the other hand, airplanes make particularly attractive targets for several reasons. One, they’re considered national symbols. Two, they’re a common and important travel vehicle, and are deeply embedded throughout our economy. Three, they travel to distant places where the terrorists are. And four, the failure mode is severe: a small bomb drops the plane out of the sky and kills everyone. I don’t expect you to give back any of your budget, but when do we have “enough” airplane security as compared with the rest of our nation’s infrastructure?

KH: Airplanes are a high-profile target for terrorists for all the reasons you cited. The reason we have the focus we do on aviation is because of the effect the airline system has on our country, both economically and psychologically. We do considerable work (through grants and voluntary agreements) to ensure the safety of surface transportation, but it’s less visible to the public because people other than ones in TSA uniforms are taking care of that responsibility.

We look at the aviation system as one component in a much larger network that also includes freight rail, mass transit, highways, etc. And that’s just in the U.S. Then you add the world’s transportation sectors—it’s all about the network.

The only components that require specific security measures are the critical points of failure—and they have to be protected at virtually any cost. It doesn’t matter which individual part of the network is attacked—what matters is that the network as a whole is resilient enough to operate even with losing one or more components.

The network approach allows various transportation modes to benefit from our layers of security. Take our first layer: intel. It is fundamental to our security program to catch terrorists long before they get to their target, and even better if we catch them before they get into our country. Our intel operation works closely with other international and domestic agencies, and that information and analysis benefits all transportation modes.

Dogs have proven very successful at detecting explosives. They work in airports and they work in mass transit venues as well. As we test and pilot technologies like millimeter wave in airports, we assess their viability in other transportation modes, and vice versa.

To get back to your question, we’re not at the point where we can say “enough” for aviation security. But we’re also aware of the attractiveness of other modes and continue to use the network to share resources and lessons learned.

BS: Thank you very much for your time. I appreciate both your time and your candor.

KH: I enjoyed the exchange and appreciated your insights. Thanks for the opportunity.

Posted on August 3, 2007 at 6:12 AM52 Comments

Security Hole at Phoenix Airport

The news:

We’ve discovered a 4.5 hour time frame each night when virtually anything can be brought into the secure side of Phoenix Sky Harbor Airport. There’s no metal detector, no X-ray machine, and it’s apparently not a problem.

Afraid to show her face, one longtime Sky Harbor employee talks about the security most people don’t see.

Lisa Fletcher: “You’re telling me Sky Harbor’s not safe?”

Employee: “I’m telling you Sky Harbor’s not safe and hasn’t been for a long time.”

It’s what we discovered in the middle of the night—TSA agents going away, and security guards taking over. It’s 4.5 hours—every night—when an employee badge becomes an all-access pass.

I have mixed feelings about this story. On the one hand, it’s a big security hole that not everyone knew was there. On the other hand, airport employees are allowed to bring stuff in and out of airports without screening all the time. So yes, the airports aren’t secure—but they never have been, so what’s the big deal?

The real issue here is that people don’t understand that an airport is a complex system and that securing it means more than passenger screening.

Posted on August 2, 2007 at 11:35 AM15 Comments

Conversation with Kip Hawley, TSA Administrator (Part 4)

This is Part 4 of a five-part series. Link to whole thing.

BS: What about Registered Traveler? When TSA first started talking about the program, the plan was to divide people into two categories: more trusted people who get less screening, and less trusted people who get more screening. This opened an enormous security hole; whenever you create an easy way and a hard way through security, you invite the bad guys to take the easier way. Since then, it’s transformed into a way for people to pay for better screening equipment and faster processing—a great idea with no security downsides. Given that, why bother with the background checks at all? What else is it besides a way for a potential terrorist to spend $60 and find out if the government is on to them?

KH: Registered Traveler (RT) is a promising program but suffers from unrealistic expectations. The idea—that you and I aren’t really risks and we should be screened less so that TSA can apply scarce resources on the more likely terrorist—makes sense and got branded as RT. The problem is that with two million people a day, how can we tell them apart in an effective way? We know terrorists use people who are not on watch lists and who don’t have criminal convictions, so we can’t use those criteria alone. Right now, I’ve said that RT is behind Secure Flight in priority and that TSA is open to working with private sector entities to facilitate RT, but we will not fund it, reduce overall security, or inconvenience regular travelers. As private companies deploy extra security above what TSA does, we can change the screening process accordingly. It has to be more than a front-of-the-line pass, and I think there are some innovations coming out in the year ahead that will better define what RT can become.

BS: Let’s talk about behavioral profiling. I’ve long thought that most of airline security could be ditched in favor of well-trained guards, both in and out of uniform, wandering the crowds looking for suspicious behavior. Can you talk about some of the things you’re doing along those lines, and especially ways to prevent this from turning into just another form of racial profiling?

KH: Moving security out from behind the checkpoint is a big priority for us. First, it gives us the opportunity to pick up a threat a lot earlier. Taking away weapons or explosives at the checkpoint is stopping the plot at nearly the last possible moment. Obviously, a good security system aims at stopping attacks well before that. That’s why we have many layers of security (intel, law enforcement, behavior detection, etc.) to get to that person well before the security checkpoint. When a threat gets to the checkpoint, we’re operating on his/her terms—they pick when, where, and how they present themselves to us. We want to pick up the cues on our terms, before they’re ready, even if they’re just at the surveillance stage.

We use a system of behavior observation that is based on the science that demonstrates that there are certain involuntary, subconscious actions that can betray a person’s hostile intent. For instance, there are tiny—but noticeable to the trained person—movements in a person’s facial muscles when they have certain emotions. It is very different from the stress we all show when we’re anxious about missing the flight due to, say, a long security line. This is true across race, gender, age, ethnicity, etc. It is our way of not falling into the trap where we predict what a terrorist is going to look like. We know they use people who “look like” terrorists, but they also use people who do not, perhaps thinking that we cue only off of what the 9/11 hijackers looked like.

Our Behavior Detection teams routinely—and quietly—identify problem people just through observable behavior cues. More than 150 people have been identified by our teams, turned over to law enforcement, and subsequently arrested. This layer is invisible to the public, but don’t discount it, because it may be the most effective. We publicize non-terrorist-related successes like a murder suspect caught in Minneapolis and a bank robber caught in Philadelphia.

Most common are people showing phony documents, but we have even picked out undercover operatives—including our own. One individual, identified by a TSO in late May and not allowed to fly, was killed in a police shoot-out five days later. Additionally, several individuals have been of interest from the counter-terrorism perspective. With just this limited deployment of Behavior Detection Officers (BDOs), we have identified more people of counterterrorism interest than all the people combined caught with prohibited items. Look for us to continue to look at ways that highlight problem people rather than just problem objects.

BS: That’s really good news, and I think it’s the most promising new security measure you’ve got. Although, honestly, bragging about capturing a guy for wearing a fake military uniform just makes you look silly.

Part 5: Keeping the bomb off the plane

Posted on August 2, 2007 at 6:12 AM69 Comments

More on Smell Samples

Earlier this month, I blogged about a library of people’s smells kept by the former East German police. Seems that the current German police is still doing it:

The Stasi secret police used scent gathering in Communist East Germany, collecting smells in empty jam jars and storing them. The method has reminded Germans of that failed regime of snoopers, and was highlighted in the recent Oscar-winning film “The Lives of Others” about a Stasi surveillance officer.

The domestic policy spokesman for the Social Democrat Party, Dieter Wiefelspütz, finds the new weapon “pretty bizarre.” But he knows that unappetising though it may be, the method has been employed by German investigators for a long time.

In legal terms, recording someone’s body odour is no different from taking their finger prints. It’s covered by the criminal statute book. The scent contains a person’s identity just like the lines of his finger tips or his DNA.

Taking someone’s DNA is subject to strict conditions but the law permits finger printing and scent recording whenever police deem it necessary as part of a criminal investigation—which means virtually always. Erhard Denninger, an expert on Germany’s justice system, has no problem with scent analysis. “It’s harmless by comparison with sledgehammer plans like searching people’s computers,” he said.

Suspects are told to hold several 10 centimeter steel pipes in succession for several minutes each.

There are strict rules governing this procedure. The interior minister of the state of North Rhine-Westphalia has decreed that “persons must contaminate the metal tubes through their hands”, and that the aromatic traces thereby recorded “be secured in glass containers in dry condition.”

It sounds harmless. But a number of defence lawyers, Düsseldorf-based Udo Vetter among them, advise their clients not to agree to scent recording. If the state sniffs the sweat of its citizens, it amounts to a “considerable intrusion into one’s intimate sphere,” he says.

The complexity of collecting someone’s scent is the theme of Patrick Süskind’s novel “Perfume”, recently made into a movie, in which an 18th century murderer wraps beautiful women in cloths which he later boils. Unlike in real life, the perfume specialist chose to kill his victims before taking their scent.

Posted on August 1, 2007 at 2:05 PM12 Comments

Movie-Plot Threats in Second Life

Oh, give me a break:

On the darker side, there are also weapons armouries in SL where people can get access to guns, including automatic weapons and AK47s. Searches of the SL website show there are three jihadi terrorists registered and two elite jihadist terrorist groups.

Once these groups take up residence in SL, it is easy to start spreading propaganda, recruiting and instructing like minds on how to start terrorist cells and carry out jihad.

One radical group, called Second Life Liberation Army, has been responsible for some computer-coded atomic bombings of virtual world stores in the past six months.

On screen these blasts look like an explosion of hazy white balls as buildings explode, landscapes are razed and residents are wounded or killed.

With the game taking such a sinister turn, terrorism experts are warning that SL attacks have ramifications for the real world. Just as September 11 terrorists practised flying planes on simulators in preparation for their deadly assault on US buildings, law enforcement agencies believe some of those behind the Second Life attacks are home-grown Australian jihadists who are rehearsing for strikes against real targets.

Geez. Do we all need to take our shoes off before logging in or something? Refuse to be terrorized, people.

Another discussion.

EDITED TO ADD (8/2): Another article.

Posted on August 1, 2007 at 11:49 AM37 Comments

Conversation with Kip Hawley, TSA Administrator (Part 3)

This is Part 3 of a five-part series. Link to whole thing.

BS: Let’s talk about ID checks. I’ve called the no-fly list a list of people so dangerous they cannot be allowed to fly under any circumstance, yet so innocent we can’t arrest them even under the Patriot Act. Except that’s not even true; anyone, no matter how dangerous they are, can fly without an ID, or by using someone else’s boarding pass. And the list itself is filled with people who shouldn’t be on it—dead people, people in jail, and so on—and primarily catches innocents with similar names. Why are you bothering?

KH: Because it works. We just completed a scrub of every name on the no-fly list and cut it in half—essentially cleaning out people who were no longer an active terror threat. We do not publicize how often the no-fly system stops people you would not want on your flight. Several times a week would low-ball it.

Your point about the no-ID and false boarding pass people is a great one. We are moving people who have tools and training to get at that problem. The bigger issue is that TSA is moving in the direction of security that picks up on behavior versus just keying on what we see in your bag. It really would be security theater if all we did was try to find possible weapons in that crunched fifteen seconds and fifteen feet after you anonymously walk through the magnetometer. We do a better job, with less aggravation of ordinary passengers, if we put people-based layers further ahead in the process—behavior observation based on involuntary, observable muscle behavior, canine teams, document verification, etc.

BS: We’ll talk about behavioral profiling later; no fair defending one security measure by pointing to another, completely separate, one. How can you claim ID cards work? Like the liquid ban, all it does is annoy innocent travelers without doing more than inconveniencing any future terrorists. Is it really good enough for you to defend me from terrorists too dumb to Google “print your own boarding pass”?

KH: We are getting at the fake boarding pass and ID issues with our proposal to Congress that would allow us to replace existing document checkers with more highly trained people with tools that would close those gaps. Without effective identity verification, watch lists don’t do much, so this is a top priority.

Having highly trained TSOs performing the document checking function closes a security gap, adds another security layer, and pushes TSA’s security program out in front of the checkpoint.

BS: Let’s move on. Air travelers think you’re capricious. Remember in April when the story went around about the Princeton professor being on a no-fly list because he spoke out against President Bush? His claims were easily debunked, but the real story is that so many people believed it. People believe political activity puts them on the list. People are afraid to complain about being mistreated at checkpoints because they’re afraid it puts them on a list. Is there anything you can do to make this process more transparent?

KH: We need some help on this one. This is the biggest public pain point, dwarfing shoes and baggies.

First off, TSA does not add people to the watch-lists, no matter how cranky you are at a checkpoint. Second, political views have nothing to do with no-flys or selectees. These myths have taken on urban legend status. There are very strict criteria and they are reviewed by lots of separate people in separate agencies: it is for live terror concerns only. The problem comes from random selectees (literally mathematically random) or people who have the same name and birth date as real no-flys. If you can get a boarding pass, you are not on the no-fly list. This problem will go away when Secure Flight starts in 2008, but we can’t seem to shake the false impression that ordinary Americans get put on a “list.” I am open for suggestions on how to make the public “get it.”

BS: It’s hard to believe that there could be hundreds of thousands of people meeting those very strict criteria, and that’s after the list was cut in half! I know the TSA does not control the no-fly and watch lists, but you’re the public face of those lists. You’re the aspect of homeland security that people come into direct contact with. Some people might find out they’re on the list by being arrested, or being shipped off to Syria for torture, but most people find out they’re on the list by being repeatedly searched and questioned for hours at airports.

The main problem with the list is that it’s secret. Who is on the list is secret. Why someone’s on is secret. How someone can get off is secret. There’s no accountability and there’s no transparency. Of course this kind of thing induces paranoia. It’s the sort of thing you read about in history books about East Germany and other police states.

The best thing you can do to improve the problem is redress. People need the ability to see the evidence against them, challenge their accuser, and have a hearing in a neutral court. If they’re guilty of something, arrest them. And if they’re innocent, stop harassing them. It’s basic liberty.

I don’t actually expect you to fix this; the problem is larger than the TSA. But can you tell us something about redress? It’s been promised to us for years now.

KH: Redress issues are divided into two categories: people on the no-fly list and people who have names similar to them.

In our experience, the first group is not a heavy user of the redress process. They typically don’t want anything to do with the U.S. government. Still, if someone is either wrongly put on or kept on, the Terrorist Screening Center (TSC) removes him or her immediately. In fact, TSA worked with the TSC to review every name, and that review cut the no-fly list in half. Having said that, once someone is really on the no-fly list, I totally agree with what you said about appeal rights. This is true across the board, not just with no-flys. DHS has recently consolidated redress for all DHS activities into one process called DHS TRIP. If you are mistaken for a real no-fly, you can let TSA know and we provide your information to the airlines, who right now are responsible for identifying no-flys trying to fly. Each airline uses its own system, so some can get you cleared to use kiosks, while others still require a visit to the ticket agent. When Secure Flight is operating, we’ll take that in-house at TSA and the problem should go away.

BS: I still don’t see how that will work, as long as the TSA doesn’t have control over who gets on or off the list.

Part 4: Registered Traveler and behavioral profiling

Posted on August 1, 2007 at 6:12 AM92 Comments
