Blog: November 2010 Archives

The Constitutionality of Full-Body Scanners

Jeffrey Rosen opines:

Although the Supreme Court hasn’t evaluated airport screening technology, lower courts have emphasized, as the U.S. Court of Appeals for the 9th Circuit ruled in 2007, that “a particular airport security screening search is constitutionally reasonable provided that it ‘is no more extensive nor intensive than necessary, in the light of current technology, to detect the presence of weapons or explosives.'”

In a 2006 opinion for the U.S. Court of Appeals for the 3rd Circuit, then-Judge Samuel Alito stressed that screening procedures must be both “minimally intrusive” and “effective” – in other words, they must be “well-tailored to protect personal privacy,” and they must deliver on their promise of discovering serious threats. Alito upheld the practices at an airport checkpoint where passengers were first screened with walk-through magnetometers and then, if they set off an alarm, with hand-held wands. He wrote that airport searches are reasonable if they escalate “in invasiveness only after a lower level of screening disclose[s] a reason to conduct a more probing search.”

As currently used in U.S. airports, the new full-body scanners fail all of Alito’s tests.

In other news, The New York Times wrote an editorial in favor of the scanners. I was surprised.

Posted on November 30, 2010 at 12:09 PM

Mohamed Osman Mohamud

I agree with Glenn Greenwald. I don’t know whether the FBI arrested an actual terrorist, or whether this is another case of entrapment.

All of the information about this episode—all of it—comes exclusively from an FBI affidavit filed in connection with a Criminal Complaint against Mohamud. As shocking and upsetting as this may be to some, FBI claims are sometimes one-sided, unreliable and even untrue, especially when such claims—as here—are uncorroborated and unexamined.

This, although old, is relevant. So is this, although even older:

The JFK Airport plotters seem to have been egged on by an informant, a twice-convicted drug dealer. An FBI informant almost certainly pushed the Fort Dix plotters to do things they wouldn’t have ordinarily done. The Miami gang’s Sears Tower plot was suggested by an FBI undercover agent who infiltrated the group. And in 2003, it took an elaborate sting operation involving three countries to arrest an arms dealer for selling a surface-to-air missile to an ostensible Muslim extremist. Entrapment is a very real possibility in all of these cases.

In any case, notice that it was old-fashioned police investigation that caught this guy.

EDITED TO ADD (12/13): Another analysis.

Posted on November 30, 2010 at 5:54 AM

Zoo Security

From a study on zoo security:

Among other measures, the scientists recommend not allowing animals to walk freely within the zoo grounds, and ensuring there is a physical barrier marking the zoo boundaries, and preventing individuals from escaping through drains, sewers or any other channels.

Isn’t all that sort of obvious?

Posted on November 29, 2010 at 12:32 PM

Causing Terror on the Cheap

Total cost for the Yemeni printer cartridge bomb plot: $4,200.

“Two Nokia mobiles, $150 each, two HP printers, $300 each, plus shipping, transportation and other miscellaneous expenses add up to a total bill of $4,200. That is all what Operation Hemorrhage cost us,” the magazine said.

Even if you add in costs for training, recruiting, logistics, and everything else, that’s still remarkably cheap. And think of how many times that amount we spent on security in the aftermath.

As it turns out, this is bin Laden’s plan:

In his October 2004 address to the American people, bin Laden noted that the 9/11 attacks cost al Qaeda only a fraction of the damage inflicted upon the United States. “Al Qaeda spent $500,000 on the event,” he said, “while America in the incident and its aftermath lost—according to the lowest estimates—more than $500 billion, meaning that every dollar of al Qaeda defeated a million dollars.”

The economic strategy of jihad would go through refinement. Its initial phase linked terrorist attacks broadly to economic harm. A second identifiable phase, which al Qaeda pursued even as it continued to attack economic targets, is what you might call its “bleed-until-bankruptcy plan.” Bin Laden announced this plan in October 2004, in the same video in which he boasted of the economic harm inflicted by 9/11. Terrorist attacks are often designed to provoke an overreaction from the opponent and this phase seeks to embroil the United States and its allies in draining wars in the Muslim world. The mujahideen “bled Russia for 10 years, until it went bankrupt,” bin Laden said, and they would now do the same to the United States.

[…]

The point is clear: Security is expensive, and driving up costs is one way jihadists can wear down Western economies. The writer encourages the United States “not to spare millions of dollars to protect these targets” by increasing the number of guards, searching all who enter those places, and even preventing flying objects from approaching the targets. “Tell them that the life of the American citizen is in danger and that his life is more significant than billions of dollars,” he wrote. “Hand in hand, we will be with you until you are bankrupt and your economy collapses.”

None of this would work if we didn’t help them by terrorizing ourselves. I wrote this after the Underwear Bomber failed:

Finally, we need to be indomitable. The real security failure on Christmas Day was in our reaction. We’re reacting out of fear, wasting money on the story rather than securing ourselves against the threat. Abdulmutallab succeeded in causing terror even though his attack failed.

If we refuse to be terrorized, if we refuse to implement security theater and remember that we can never completely eliminate the risk of terrorism, then the terrorists fail even if their attacks succeed.

Posted on November 29, 2010 at 6:52 AM

Friday Squid Blogging: Studying Squid Hearing

At Woods Hole:

It is known now, through the work of Mooney and others, that the squid hearing system has some similarities and some differences compared to human hearing. Squid have a pair of organs called statocysts, balance mechanisms at the base of the brain that contain a tiny grain of calcium, which maintains its position as the animal maneuvers in the water. These serve a function similar to human ear canals.

Each statocyst is a hollow, fluid-filled sac lined with hair cells, like human cochlea. On the outside of the sac, the hair cells are connected to nerves, which lead to the brain. “It’s kind of like an inside-out tennis ball,” Mooney said, “hairy on the inside, smooth on the outside.”

The calcium grain, called a statolith, enables the squid to sense its position in the water, based on which hair cells it’s in contact with at a given moment. Normally it rests near the front of the sac, touching some of the hair cells.

Another article.

Posted on November 26, 2010 at 4:58 PM

Psychopaths and Security

I have been thinking a lot about security against psychopaths. Or, at least, how we have traditionally secured social systems against these sorts of people, and how we can secure our socio-technical systems against them. I don’t know if I have any conclusions yet, only a short reading list.

EDITED TO ADD (12/12): Good article from 2001. The sociobiology of sociopathy. Psychopathic fraudsters and how they function in bureaucracies.

Posted on November 26, 2010 at 1:52 PM

The DHS is Getting Rid of the Color-Coded Terrorism Alert System

Good. It was always a dumb idea:

The color-coded threat levels were doomed to fail because “they don’t tell people what they can do—­ they just make people afraid,” said Bruce Schneier, an author on security issues. He said the system was “a relic of our panic after 9/11” that “never served any security purpose.”

I wrote this in 2004:

In theory, the warnings are supposed to cultivate an atmosphere of preparedness. If Americans are vigilant against the terrorist threat, then maybe the terrorists will be caught and their plots foiled. And repeated warnings brace Americans for the aftermath of another attack.

The problem is that the warnings don’t do any of this. Because they are so vague and so frequent, and because they don’t recommend any useful actions that people can take, terror threat warnings don’t prevent terrorist attacks. They might force a terrorist to delay his plan temporarily, or change his target. But in general, professional security experts like me are not particularly impressed by systems that merely force the bad guys to make minor modifications in their tactics.

And the alerts don’t result in a more vigilant America. It’s one thing to issue a hurricane warning, and advise people to board up their windows and remain in the basement. Hurricanes are short-term events, and it’s obvious when the danger is imminent and when it’s over. People can do useful things in response to a hurricane warning; then there is a discrete period when their lives are markedly different, and they feel there was utility in the higher alert mode, even if nothing came of it.

It’s quite another thing to tell people to be on alert, but not to alter their plans, as Americans were instructed last Christmas. A terrorist alert that instills a vague feeling of dread or panic, without giving people anything to do in response, is ineffective. Indeed, it inspires terror itself. Compare people’s reactions to hurricane threats with their reactions to earthquake threats. According to scientists, California is expecting a huge earthquake sometime in the next two hundred years. Even though the magnitude of the disaster will be enormous, people just can’t stay alert for two centuries. The news seems to have generated the same levels of short-term fear and long-term apathy in Californians that the terrorist warnings do. It’s human nature; people simply can’t be vigilant indefinitely.

Another alert system to compare this one to is the DEFCON system. At each DEFCON level, there are specific actions people have to take: at one DEFCON level—and I’m making this up—you call everyone back from leave, at another you fuel all the bombers, at another you arm the bombs, and so on. What actions am I supposed to take when the terrorist threat level is Yellow? When it is Orange? I have no idea.

EDITED TO ADD (11/25): Good observation:

The DHS National Threat Advisory is a public alert system. That a public alert system is indicating imminent disaster is not surprising. In fact it’s inevitable. It’s the nature of public alert systems to signal imminent disaster at all times. I’ve composed “Blakley’s Law” (next time I come up with one of these I’ll rename this one “Blakley’s First Law”) to describe the phenomenon:

“Every public alert system’s status indicator rises until it reaches its disaster imminent setting and remains at that setting until it is retired from service.”

It’s easy to see why Blakley’s law holds: if something terrible happens and the alert status didn’t predict it, the keepers of the alert status will be blamed for not preparing us for the disaster. Setting the alert status to “Disaster imminent” when no disaster is likely costs the public some money and mental health, but it doesn’t hurt them in other ways. On the other hand, setting the alert status to “Don’t worry, be happy” just before a disaster does happen is the worst case for everyone – nobody prepares for the disaster, and the people in power lose their jobs for failing to prevent or prepare for the crisis.

Posted on November 25, 2010 at 6:39 AM

New ATM Skimming Attack

In Europe, although the article doesn’t say where:

Many banks have fitted ATMs with devices that are designed to thwart criminals from attaching skimmers to the machines. But it now appears in some areas that those devices are being successfully removed and then modified for skimming, according to the latest report from the European ATM Security Team (EAST), which collects data on ATM fraud throughout Europe.

Posted on November 24, 2010 at 1:33 PM

Me on Airport Security

Yesterday I participated in a New York Times “Room for Debate” discussion on airline security. My contribution is nothing I haven’t said before, so I won’t reprint it here.

A short history of airport security: We screen for guns and bombs, so the terrorists use box cutters. We confiscate box cutters and corkscrews, so they put explosives in their sneakers. We screen footwear, so they try to use liquids. We confiscate liquids, so they put PETN bombs in their underwear. We roll out full-body scanners, even though they wouldn’t have caught the Underwear Bomber, so they put a bomb in a printer cartridge. We ban printer cartridges over 16 ounces—the level of magical thinking here is amazing—and they’re going to do something else.

This is a stupid game, and we should stop playing it.

The other participants are worth reading, too.

I also did an interview in—of all places—Popular Mechanics.

Posted on November 23, 2010 at 6:11 AM

Defeating al Qaeda

Rare common sense:

But Gen Richards told the BBC it was not possible to defeat the Taliban or al-Qaeda militarily.

“You can’t. We’ve all said this. David Petraeus has said it, I’ve said it.

“The trick is the balance of things that you’re doing and I say that the military are just about, you know, there.

“The biggest problem’s been ensuring that the governance and all the development side can keep up with it within a time frame and these things take generations sometimes within a time frame that is acceptable to domestic, public and political opinion,” he said.

[…]

Shadow defence secretary Jim Murphy told the BBC Gen Richards was “right” that there was no purely military solution and said there would be “no white flag surrender moment”.

“This is a complicated issue. It will be for the long haul. It’s got to do with history.

“But I think he’s right to talk about the different ways that this has got to be taken on – militarily yes but diplomatically and in a peaceful sense of nation building in Afghanistan is also important,” he said.

Posted on November 22, 2010 at 1:08 PM

Stuxnet News

Another piece of the puzzle:

New research, published late last week, has established that Stuxnet searches for frequency converter drives made by Fararo Paya of Iran and Vacon of Finland. In addition, Stuxnet is only interested in frequency converter drives that operate at very high speeds, between 807 Hz and 1210 Hz.

The malware is designed to change the output frequencies of drives, and therefore the speed of associated motors, for short intervals over periods of months. This would effectively sabotage the operation of infected devices while creating intermittent problems that are that much harder to diagnose.

Low-harmonic frequency converter drives that operate at over 600 Hz are regulated for export in the US by the Nuclear Regulatory Commission as they can be used for uranium enrichment. They may have other applications but would certainly not be needed to run a conveyor belt at a factory, for example.

The threat of Stuxnet variants is being used to scare senators.

Me on Stuxnet.

Posted on November 22, 2010 at 6:19 AM

TSA Backscatter X-ray Backlash

Things are happening so fast that I don’t know if I should bother. But here are some links and observations.

The head of the Allied Pilots Association is telling its members to avoid both the full body scanners and the patdowns.

This first-hand report, from a man who refused to fly rather than subject himself to a full-body scan or an enhanced patdown, has been making the rounds. (The TSA is now investigating him.) It reminds me of Penn Jillette’s story from 2002.

A woman has a horrific story of opting out of the full-body scanners. More stories: this one about the TSA patting down a screaming toddler. And here’s Dave Barry’s encounter (also this NPR interview).

Sadly, I agree with this:

It is no accident that women have been complaining about being pulled out of line because of their big breasts, having their bodies commented on by TSA officials, and getting inappropriate touching when selected for pat-downs for nearly 10 years now, but just this week it went viral. It is no accident that CAIR identified Islamic head scarves (hijab) as an automatic trigger for extra screenings in January, but just this week it went viral. What was different?

Suddenly an able-bodied white man is the one who was complaining.

Seems that once you enter airport security, you need to be subjected to it—whether you decide to fly or not.

I experienced the enhanced patdown myself, at DCA, on Tuesday. It was invasive, but not as bad as these stories. It seems clear that TSA agents are inconsistent about these procedures. They’ve probably all had the same training, but individual agents put it into practice very differently.

Of course, airport security is an extra-Constitutional area, so there’s no clear redress mechanism for those subjected to too-intimate patdowns.

This video provides tips to parents flying with young children. Around 2:50 in, the reporter indicates that you can find out if your child has been pre-selected for secondary, and then recommends requesting “de-selection.” That doesn’t make sense.

Neither does this story, which says that the TSA will only touch Muslim women in the head and neck area.

Nor does this story. The author convinces people in line to opt out with him. After the first four opt-outs, the TSA just sent people through the metal detectors.

Yesterday, the TSA administrator John Pistole was grilled by the Senate Commerce, Science, and Transportation Committee on full-body scanners. Rep. Ron Paul introduced a bill to ban them. (His floor speech is here.) I’m one of the plaintiffs in a lawsuit to ban them.

Book for kids: My First Cavity Search. Cover seen at a TSA checkpoint.

T-shirts: one, two, three, and four. “Comply with Me” song parody. Political cartoons: one, two, three, and four. New TSA logo. Best TSA tweets, including “It’s not a grope. It’s a freedom pat.”

Good essay from a libertarian perspective. Two more. Marc Rotenberg’s essay. Ralph Nader’s essay. And the Los Angeles Times really screws up with this editorial: “Shut Up and Be Scanned.” Amitai Etzioni makes a better case for the machines.

Michael Chertoff, former Department of Homeland Security secretary, has been touting the full-body scanners, while at the same time maintaining a financial interest in the company that makes them.

There’s talk about the health risks of the machines, but I can’t believe you won’t get more radiation on the flight. Here’s some data:

A typical dental X-ray exposes the patient to about 2 millirems of radiation. According to one widely cited estimate, exposing each of 10,000 people to one rem (that is, 1,000 millirems) of radiation will likely lead to 8 excess cancer deaths. Using our assumption of linearity, that means that exposure to the 2 millirems of a typical dental X-ray would lead an individual to have an increased risk of dying from cancer of 16 hundred-thousandths of one percent. Given that very small risk, it is easy to see why most rational people would choose to undergo dental X-rays every few years to protect their teeth.

More importantly for our purposes, assuming that the radiation in a backscatter X-ray is about a hundredth the dose of a dental X-ray, we find that a backscatter X-ray increases the odds of dying from cancer by about 16 ten millionths of one percent. That suggests that for every billion passengers screened with backscatter radiation, about 16 will die from cancer as a result.

Given that there will be 600 million airplane passengers per year, that makes the machines deadlier than the terrorists.
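The quoted arithmetic is easy to verify. Here’s a quick sanity check; every input, including the linear dose-response assumption, comes from the quote above:

```python
# Sanity check of the quoted risk arithmetic. All inputs come from the
# quote: a linear dose-response model, 8 excess cancer deaths per
# 10,000 person-rems, a 2-millirem dental X-ray, and a backscatter scan
# at roughly 1/100 of a dental X-ray's dose.

risk_per_rem = 8 / 10_000                    # excess-death probability per rem
dental_dose_rem = 2 / 1_000                  # 2 millirems
backscatter_dose_rem = dental_dose_rem / 100

dental_risk = risk_per_rem * dental_dose_rem            # 1.6e-06
backscatter_risk = risk_per_rem * backscatter_dose_rem  # 1.6e-08

print(f"dental X-ray risk:         {dental_risk:.1e}")       # 16 hundred-thousandths of 1%
print(f"backscatter risk:          {backscatter_risk:.1e}")  # 16 ten-millionths of 1%
print(f"deaths per billion scans:  {backscatter_risk * 1e9:.0f}")    # ~16
print(f"deaths/year at 600M scans: {backscatter_risk * 600e6:.1f}")  # ~10
```

At 600 million scans a year, that last line works out to roughly ten deaths per year from the machines themselves.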

Nate Silver on the hidden cost of these new airport security measures.

According to the Cornell study, roughly 130 inconvenienced travelers died every three months as a result of additional traffic fatalities brought on by substituting ground transit for air transit. That’s the equivalent of four fully-loaded Boeing 737s crashing each year.

Jeffrey Goldberg asked me which I would rather see for children: backscatter X-ray or enhanced pat down. After remarking on what an icky choice it was, I opted for the X-ray; it’s less traumatic.

Here are a bunch of leaked body scans. They’re not from airports, but they should make you think twice before accepting the TSA’s assurances that the images will never be saved. RateMyBackscatter.com.

November 24 is National Opt Out Day. Doing this just before the Thanksgiving holiday is sure to clog up airports. Jeffrey Goldberg suggests that men wear kilts, commando style if possible.

At least one airport is opting out of the TSA entirely. I hadn’t known you could do that.

The New York Times on the protests.

Common sense from the Netherlands:

The security boss of Amsterdam’s Schiphol Airport is calling for an end to endless investment in new technology to improve airline security.

Marijn Ornstein said: “If you look at all the recent terrorist incidents, the bombs were detected because of human intelligence not because of screening … If even a fraction of what is spent on screening was invested in the intelligence services we would take a real step toward making air travel safer and more pleasant.”

And here’s Rafi Sela, former chief security officer of the Israel Airport Authority:

A leading Israeli airport security expert says the Canadian government has wasted millions of dollars to install “useless” imaging machines at airports across the country.

“I don’t know why everybody is running to buy these expensive and useless machines. I can overcome the body scanners with enough explosives to bring down a Boeing 747,” Rafi Sela told parliamentarians probing the state of aviation safety in Canada.

“That’s why we haven’t put them in our airport,” Sela said, referring to Tel Aviv’s Ben Gurion International Airport, which has some of the toughest security in the world.

They can be fooled by creased clothing. And remember this German video?

I’m quoted in the Los Angeles Times:

Some experts argue the new procedures could make passengers uncomfortable without providing a substantial increase in security. “Security measures that just force the bad guys to change tactics and targets are a waste of money,” said Bruce Schneier, a security expert who works for British Telecom. “It would be better to put that money into investigations and intelligence.”

I’m quoted in The Wall Street Journal twice—once as saying:

“All these machines require you to guess the plot correctly. If you don’t, then they are completely worthless,” said Bruce Schneier, a security expert.

Mr. Schneier and some other experts argue that assembling better intelligence on fliers is the key to making travel safer.

and once as saying:

Security guru Bruce Schneier, a plaintiff in the scanner suit, calls this “magical thinking . . . Descend on what the terrorists happened to do last time, and we’ll all be safe. As if they won’t think of something else.”

In 2005, I wrote:

I’m not impressed with this security trade-off. Yes, backscatter X-ray machines might be able to detect things that conventional screening might miss. But I already think we’re spending too much effort screening airplane passengers at the expense of screening luggage and airport employees…to say nothing of the money we should be spending on non-airport security.

On the other side, these machines are expensive and the technology is incredibly intrusive. I don’t think that people should be subjected to strip searches before they board airplanes. And I believe that most people would be appalled by the prospect of security screeners seeing them naked.

I believe that there will be a groundswell of popular opposition to this idea. Aside from the usual list of pro-privacy and pro-liberty groups, I expect fundamentalist Christian groups to be appalled by this technology. I think we can get a bevy of supermodels to speak out against the invasiveness of the search.

On the other hand, CBS News is reporting that 81% of Americans support full-body scans. Maybe they should only ask flying Americans.

I still stand by this, also from 2005:

Exactly two things have made airline travel safer since 9/11: reinforcement of cockpit doors, and passengers who now know that they may have to fight back. Everything else—Secure Flight and Trusted Traveler included—is security theater. We would all be a lot safer if, instead, we implemented enhanced baggage security—both ensuring that a passenger’s bags don’t fly unless he does, and explosives screening for all baggage—as well as background checks and increased screening for airport employees.

Then we could take all the money we save and apply it to intelligence, investigation and emergency response. These are security measures that pay dividends regardless of what the terrorists are planning next, whether it’s the movie plot threat of the moment, or something entirely different.

And this, written in 2010 after the Underwear Bomber failed:

Finally, we need to be indomitable. The real security failure on Christmas Day was in our reaction. We’re reacting out of fear, wasting money on the story rather than securing ourselves against the threat. Abdulmutallab succeeded in causing terror even though his attack failed.

If we refuse to be terrorized, if we refuse to implement security theater and remember that we can never completely eliminate the risk of terrorism, then the terrorists fail even if their attacks succeed.

See these two essays of mine as well, from the same time.

More resources on the EPIC pages.

What else is going on?

EDITED TO ADD (11/19): Lots more political cartoons.

Good summary of your legal rights and options from the ACLU. They also have a form you can fill out and send to your Congresscritter.

This has to win DHS Quote of the Year, from Secretary Janet Napolitano on the issue:

I really want to say, look, let’s be realistic and use our common sense.

The TSA doesn’t train its screeners very well. A response to a letter-writer from Sen. Coburn. From Slate: “Does the TSA Ever Catch Terrorists?” A pilot’s story. The screeners’ point of view. Good essay from the National Post.

Fun with the Playmobil airline security screening playset.

Meg McLain, whose horrific story I linked to above, lied. Here’s an interview with her.

EDITED TO ADD (11/20): I was interviewed by Popular Mechanics.

Woman forced to remove prosthetic breast. A TSO caught saying “heads up, got a cutie for you” into his headset to the other officers. Compilation news video of TSA behavior.

Here’s an alert you can hand out to passengers at security checkpoints where there are backscatter machines.

EDITED TO ADD (11/21): Me in an Associated Press piece on the anti-TSA backlash:

“After 9/11 people were scared and when people are scared they’ll do anything for someone who will make them less scared,” said Bruce Schneier, a Minneapolis security technology expert who has long been critical of the TSA. “But … this is particularly invasive. It’s strip-searching. It’s body groping. As abhorrent goes, this pegs it.”

President Obama comments:

“I understand people’s frustrations, and what I’ve said to the TSA is that you have to constantly refine and measure whether what we’re doing is the only way to assure the American people’s safety. And you also have to think through are there other ways of doing it that are less intrusive,” Obama said.

“But at this point, TSA in consultation with counterterrorism experts have indicated to me that the procedures that they have been putting in place are the only ones right now that they consider to be effective against the kind of threat that we saw in the Christmas Day bombing.”

TSA sendup on Saturday Night Live yesterday.

EDITED TO ADD (11/22): The thing about Muslim women being exempt seems to be based on a misreading of this press release. What they seem to be saying is that if you’re selected because you could have something under your hijab, then they need to pat down only the area the hijab covers. It’s not a special exemption.

TSA Administrator John Pistole comments:

We are constantly evaluating and adapting our security measures, and as we have said from the beginning, we are seeking to strike the right balance between privacy and security. In all such security programs, especially those that are applied nation-wide, there is a continual process of refinement and adjustment to ensure that best practices are applied and that feedback and comment from the traveling public is taken into account.

EDITED TO ADD (11/23): Fantastic infographic. Excellent poster image. This, too. And another political cartoon.

Yesterday I participated in a New York Times “Room for Debate” discussion on airline security. My contribution is nothing I haven’t said before, so I won’t reprint it here. The other participants are worth reading too.

More from Nate Silver, on public opinion and the likely TSA reaction:

It is perhaps foolish to predict how the T.S.A. will respond this time—when they have relaxed rules in the past, they have done so quietly, rather than in response to some acute public backlash. But caution aside, I would be surprised if the new procedures survived much past the New Year without significant modification.

CNN’s advice to the public.

Things are definitely strained out there:

Through a statement released by his attorney Sunday night, Wolanyk said “TSA needs to see that I’m not carrying any weapons, explosives, or other prohibited substances, I refuse to have images of my naked body viewed by perfect strangers, and having been felt up for the first time by TSA the week prior (I travel frequently) I was not willing to be molested again.”

Wolanyk’s attorney said that TSA requested his client put his clothes on so he could be patted down properly but his client refused to put his clothes back on. He never refused a pat down, according to his attorney. Wolanyk was arrested for refusing to complete the security process.

From the same article:

A woman, identified by Harbor police as Danielle Kelli Hayman, 39, of San Diego was detained for recording the incident on a phone.

That’s much more worrying.

Interview with Brian Michael Jenkins, a senior advisor at the RAND Corp. and a former member of the White House Commission on Aviation Safety and Security.

Here’s someone who managed to avoid both the full-body scanners and the enhanced pat down. It took him two and a half hours. And here’s someone who got patted down, and managed to sneak two razor blades through security anyway.

How the TSA will deal with people with disabilities. How the pat downs affect survivors of sexual assault. (Read also the comments here.) Juan Cole on how airport security has shifted from looking for people with guns and traditional bombs to looking for people with PETN. And TSA-proof underwear.

EDITED TO ADD (11/24): Information on the health risks of the backscatter machines. And here’s a woman who stripped down to her underwear before going through airport security. This comes from a perspective I generally don’t buy, but it’s hard to dismiss his writing. I don’t think it’s a conspiracy, but I do think it’s a trend. “This Modern World” has a comic on the topic. Slate on the lack of guidelines. Why the TSA should be privatized.

EDITED TO ADD (11/25): I was on Keith Olbermann last night.

Posted on November 19, 2010 at 5:37 AM

Airplane Terrorism Twenty Years Ago

Excellent:

Here’s a scenario:

Middle Eastern terrorists hijack a U.S. jetliner bound for Italy. A two-week drama ensues in which the plane’s occupants are split into groups and held hostage in secret locations in Lebanon and Syria.

While this drama is unfolding, another group of terrorists detonates a bomb in the luggage hold of a 747 over the North Atlantic, killing more than 300 people.

Not long afterward, terrorists kill 19 people and wound more than a hundred others in coordinated attacks at European airport ticket counters.

A few months later, a U.S. airliner is bombed over Greece, killing four passengers.

Five months after that, another U.S. airliner is stormed by heavily armed terrorists at the airport in Karachi, Pakistan, killing at least 20 people and wounding 150 more.

Things are quiet for a while, until two years later when a 747 bound for New York is blown up over Europe killing 270 passengers and crew.

Nine months from then, a French airliner en route to Paris is bombed over Africa, killing 170 people from 17 countries.

That’s a pretty macabre fantasy, no? A worst-case war-game scenario for the CIA? A script for the End Times? Except, of course, that everything above actually happened, in a four-year span between 1985 and 1989.

Refuse to be terrorized, everyone.

Posted on November 18, 2010 at 12:19 PM

Unsolicited Terrorism Tips to the U.S. Government

Adding them all up, the U.S. government “receives between 8,000 and 10,000 pieces of information per day, fingering just as many different people as potential threats. They also get information about 40 supposed plots against the United States or its allies daily.”

All of this means that first-time suspects and isolated pieces of information are less likely to be exhaustively investigated. That’s what happened with underwear bomber Umar Farouk Abdulmutallab. Intelligence agencies had heard that a Nigerian was training with al-Qaeda, received information about a Christmas plot, and read a couple of intercepts about someone named Umar Farouk (no last name) before Abdulmutallab’s father walked into a U.S. embassy to report him. No one ever figured out that these seemingly unrelated pieces of intelligence referred to the same plot, so intelligence agencies didn’t pour enough resources into investigating it.

As I wrote in my 2007 essay “The War on the Unexpected”:

If you ask amateurs to act as front-line security personnel, you shouldn’t be surprised when you get amateur security.

Posted on November 18, 2010 at 6:13 AM

Term Paper Writing for Hire

This recent essay (commentary here) reminded me of this older essay, both by people who write student term papers for hire.

There are several services that do automatic plagiarism detection—basically, comparing phrases from the paper with general writings on the Internet and even caches of previously written papers—but detecting this kind of custom-written work is much harder.

I can think of three ways to deal with this:

  1. Require all writing to be done in person, and proctored. Obviously this won’t work for larger pieces of writing like theses.
  2. Semantic analysis in an attempt to fingerprint writing styles. It’s by no means perfect, but it is possible to detect whether a piece of writing looks nothing like a student’s normal writing style; a rough sketch appears after this list.
  3. In-person quizzes on the writing. If a professor sits down with the student and asks detailed questions about the writing, he can pretty quickly determine whether the student understands what he claims to have written.
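To make option 2 concrete, here is a minimal stylometric sketch: it fingerprints writing by function-word frequencies and flags a submission whose profile sits far from the student’s earlier papers. The word list, distance measure, and threshold are illustrative assumptions of mine, not a production method:

```python
# Minimal stylometry sketch: compare function-word frequency profiles.
# The function-word list, the cosine-distance measure, and the 0.3
# threshold are illustrative choices; real systems use richer features.
import math
import re
from collections import Counter

FUNCTION_WORDS = ["the", "of", "and", "to", "a", "in", "that", "is",
                  "was", "it", "for", "with", "as", "but", "however"]

def profile(text: str) -> list[float]:
    """Relative frequency of each function word in the text."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(words)
    total = max(len(words), 1)
    return [counts[w] / total for w in FUNCTION_WORDS]

def cosine_distance(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    if na == 0 or nb == 0:
        return 1.0
    return 1.0 - dot / (na * nb)

def looks_unlike_prior_work(submission: str, prior_papers: list[str],
                            threshold: float = 0.3) -> bool:
    """Flag a submission whose style is far from the student's average."""
    prior = [profile(p) for p in prior_papers]
    avg = [sum(col) / len(prior) for col in zip(*prior)]
    return cosine_distance(profile(submission), avg) > threshold
```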

The real issue is proof. Most colleges and universities are unwilling to pursue this without solid proof—the lawsuit risk is just too great—and in these cases the only real proof is self-incrimination.

Fundamentally, this is a problem of misplaced economic incentives. As long as the academic credential is worth more to a student than the knowledge gained in getting that credential, there will be an incentive to cheat.

Related note: anyone remember my personal experience with plagiarism from 2005?

Posted on November 16, 2010 at 6:36 AM

Internet Quarantines

Last month, Scott Charney of Microsoft proposed that infected computers be quarantined from the Internet. Using a public health model for Internet security, the idea is that infected computers spreading worms and viruses are a risk to the greater community and thus need to be isolated. Internet service providers would administer the quarantine, and would also clean up and update users’ computers so they could rejoin the greater Internet.

This isn’t a new idea. Already there are products that test computers trying to join private networks, and only allow them access if their security patches are up-to-date and their antivirus software certifies them as clean. Computers denied access are sometimes shunned to a limited-capability sub-network where all they can do is download and install the updates they need to regain access. This sort of system has been used with great success at universities and end-user-device-friendly corporate networks. They’re happy to let you log in with any device you want—this is the consumerization trend in action—as long as your security is up to snuff.
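As a rough sketch of the admission decision such products make: the health-report fields, thresholds, and VLAN names below are made-up illustrations, not any particular vendor’s interface:

```python
# Sketch of a network-access-control admission decision. The HealthReport
# fields, the 30-day/7-day thresholds, and the VLAN names are illustrative
# assumptions only.
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class HealthReport:
    patches_current_as_of: date   # when the OS was last fully patched
    av_signatures_age_days: int   # age of the antivirus signature database
    av_scan_clean: bool           # did the last AV scan come back clean?

def admission_vlan(report: HealthReport, today: date) -> str:
    """Return the network segment a connecting device should be placed on."""
    patch_stale = today - report.patches_current_as_of > timedelta(days=30)
    sigs_stale = report.av_signatures_age_days > 7
    if patch_stale or sigs_stale or not report.av_scan_clean:
        # Shunt to a limited sub-network where only update servers are reachable.
        return "remediation-vlan"
    return "production-vlan"

report = HealthReport(date(2010, 10, 1), av_signatures_age_days=3, av_scan_clean=True)
print(admission_vlan(report, date(2010, 11, 15)))  # "remediation-vlan": patches stale
```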

Charney’s idea is to do that on a larger scale. To implement it, we have to deal with two problems: the technical problem of making the quarantine work in the face of malware designed to evade it, and the social problem of ensuring that people don’t have their computers unduly quarantined. Understanding these problems requires us to understand quarantines in general.

Quarantines have been used to contain disease for millennia. In general several things need to be true for them to work. One, the thing being quarantined needs to be easily recognized. It’s easier to quarantine a disease if it has obvious physical characteristics: fever, boils, etc. If there aren’t any obvious physical effects, or if those effects don’t show up while the disease is contagious, a quarantine is much less effective.

Similarly, it’s easier to quarantine an infected computer if that infection is detectable. As Charney points out, his plan is only effective against worms and viruses that our security products recognize, not against those that are new and still undetectable.

Two, the separation has to be effective. The leper colonies on Molokai and Spinalonga both worked because it was hard for the quarantined to leave. Quarantined medieval cities worked less well because it was too easy to leave, or—when the diseases spread via rats or mosquitoes—because the quarantine was targeted at the wrong thing.

Computer quarantines have been generally effective because the users whose computers are being quarantined aren’t sophisticated enough to break out of the quarantine, and find it easier to update their software and rejoin the network legitimately.

Three, only a small part of the population can be in quarantine. The solution works only if it’s a minority of the population that’s affected, whether by physical diseases or computer diseases. If most people are infected, quarantining won’t slow overall infection rates much. Similarly, a quarantine that tries to isolate most of the Internet simply won’t work.

Four, the benefits must outweigh the costs. Medical quarantines are expensive to maintain, especially if people are being quarantined against their will. Determining who to quarantine is either expensive (if it’s done correctly) or arbitrary, authoritative and abuse-prone (if it’s done badly). It could even be both. The value to society must be worth it.

It’s the last point that Charney and others emphasize. If Internet worms were only damaging to the infected, we wouldn’t need a societally imposed quarantine like this. But they’re damaging to everyone else on the Internet, spreading and infecting others. At the same time, we can implement systems that quarantine cheaply. The value to society far outweighs the cost.

That makes sense, but once you move quarantines from isolated private networks to the general Internet, the nature of the threat changes. Imagine an intelligent and malicious infectious disease: that’s what malware is. The current crop of malware ignores quarantines because they’re too few and far between to affect its spread.

If we tried to implement Internet-wide—or even countrywide—quarantining, worm-writers would start building in ways to break the quarantine. So instead of nontechnical users not bothering to break quarantines because they don’t know how, we’d have technically sophisticated virus-writers trying to break quarantines. Implementing the quarantine at the ISP level would help, and if the ISP monitored computer behavior, not just specific virus signatures, it would be somewhat effective even in the face of evasion tactics. But evasion would be possible, and we’d be stuck in another computer security arms race. This isn’t a reason to dismiss the proposal outright, but it is something we need to think about when weighing its potential effectiveness.

Additionally, there’s the problem of who gets to decide which computers to quarantine. It’s easy on a corporate or university network: the owners of the network get to decide. But the Internet doesn’t have that sort of hierarchical control, and denying people access without due process is fraught with danger. What are the appeal mechanisms? The audit mechanisms? Charney proposes that ISPs administer the quarantines, but there would have to be some central authority that decided what degree of infection would be sufficient to impose the quarantine. Although this is being presented as a wholly technical solution, it’s these social and political ramifications that are the most difficult to determine and the easiest to abuse.

Once we implement a mechanism for quarantining infected computers, we create the possibility of quarantining them in all sorts of other circumstances. Should we quarantine computers that don’t have their patches up to date, even if they’re uninfected? Might there be a legitimate reason for someone to avoid patching his computer? Should the government be able to quarantine someone for something he said in a chat room, or a series of search queries he made? I’m sure we don’t think it should, but what if that chat and those queries revolved around terrorism? Where’s the line?

Microsoft would certainly like to quarantine any computers it feels are not running legal copies of its operating system or applications software. The music and movie industries will want to quarantine anyone they decide is downloading or sharing pirated media files—they’re already pushing similar proposals.

A security measure designed to keep malicious worms from spreading over the Internet can quickly become an enforcement tool for corporate business models. Charney addresses the need to limit this kind of function creep, but I don’t think it will be easy to prevent; it’s an enforcement mechanism just begging to be used.

Once you start thinking about implementation of quarantine, all sorts of other social issues emerge. What do we do about people who need the Internet? Maybe VoIP is their only phone service. Maybe they have an Internet-enabled medical device. Maybe their business requires the Internet to run. The effects of quarantining these people would be considerable, even potentially life-threatening. Again, where’s the line?

What do we do if people feel they are quarantined unjustly? Or if they are using nonstandard software unfamiliar to the ISP? Is there an appeals process? Who administers it? Surely not a for-profit company.

Public health is the right way to look at this problem. This conversation—between the rights of the individual and the rights of society—is a valid one to have, and this solution is a good possibility to consider.

There are some applicable parallels. We require drivers to be licensed and cars to be inspected not because we worry about the danger of unlicensed drivers and uninspected cars to themselves, but because we worry about their danger to other drivers and pedestrians. The small number of parents who don’t vaccinate their kids have already caused minor outbreaks of whooping cough and measles among the greater population. We all suffer when someone on the Internet allows his computer to get infected. How we balance that with individuals’ rights to maintain their own computers as they see fit is a discussion we need to start having.

This essay previously appeared on Forbes.com.

EDITED TO ADD (11/15): From an anonymous reader:

In your article you mention that for quarantines to work, you must be able to detect infected individuals. It must also be detectable quickly, before the individual has the opportunity to infect many others. Quarantining an individual after they’ve infected most of the people they regularly interact with is of little value. You must quarantine individuals when they have infected, on average, less than one other person.

Just as worm-writers would respond to the technical mechanisms to implement a quarantine by investing in ways to get around them, they would also likely invest in outpacing the quarantine. If a worm is designed to spread fast, even the best quarantine mechanisms may be unable to keep up.

Another concern with quarantining mechanisms is the damage that attackers could do if they were able to compromise the mechanism itself. This is of especially great concern if the mechanism were to include code within end-users’ TCBs to scan computers: essentially a built-in rootkit. Without a scanner in the end-user’s TCB, it’s hard to see how you could reliably detect infections.
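The reader’s “less than one other person” criterion is the classic epidemic threshold. A toy branching-process simulation makes the point: with an effective reproduction number below one, outbreaks fizzle; above one, they explode. All the rates here are arbitrary illustrative values:

```python
# Toy branching-process model of quarantine and worm spread. Each infected
# machine is caught by the quarantine with probability p_detect before it
# can spread; otherwise it infects Binomial(contacts, infect_prob) others.
# Effective reproduction number R = contacts * infect_prob * (1 - p_detect);
# the reader's criterion is R < 1. All rates are illustrative.
import random

def outbreak_size(contacts=4, infect_prob=0.5, p_detect=0.6, cap=10_000):
    active, total = 1, 1
    while active and total < cap:
        new = 0
        for _ in range(active):
            if random.random() < p_detect:
                continue                      # quarantined before spreading
            new += sum(random.random() < infect_prob for _ in range(contacts))
        active, total = new, total + new
    return total

random.seed(0)
for p in (0.4, 0.5, 0.6):                     # R = 1.2, 1.0, 0.8
    runs = [outbreak_size(p_detect=p) for _ in range(200)]
    big = sum(r >= 10_000 for r in runs) / len(runs)
    print(f"p_detect={p}: {big:4.0%} of simulated outbreaks become epidemics")
```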

Posted on November 15, 2010 at 4:55 AM

Camouflaging Test Cars

Interesting:

In an effort to shield their still-secret products from prying eyes, automakers testing prototype models, often in the desert and at other remote locales, have long covered the grilles and headlamps with rubber, vinyl and tape – the perfunctory equivalent of masks and hats. Now the old materials are being replaced or supplemented with patterned wrappings applied like wallpaper. Test cars are wearing swirling paisley patterns, harlequin-style diamonds and cubist zigzags.

Posted on November 12, 2010 at 6:28 AM

Bulletproof Service Providers

From Brian Krebs:

Hacked and malicious sites designed to steal data from unsuspecting users via malware and phishing are a dime a dozen, often located in the United States, and are a key target for takedown by ISPs and security researchers. But when online miscreants seek stability in their Web projects, they often turn to so-called “bulletproof hosting” providers, mini-ISPs that specialize in offering services that are largely immune from takedown requests and pressure from Western law enforcement agencies.

Posted on November 11, 2010 at 12:45 PM

Changing Passwords

How often should you change your password? I get asked that question a lot, usually by people annoyed at their employer’s or bank’s password expiration policy: people who finally memorized their current password and are realizing they’ll have to write down their new password. How could that possibly be more secure, they want to know.

The answer depends on what the password is used for.

The downside of changing passwords is that it makes them harder to remember. And if you force people to change their passwords regularly, they’re more likely to choose easy-to-remember—and easy-to-guess—passwords than they are if they can use the same passwords for many years. So any password-changing policy needs to be chosen with that consideration in mind.

The primary reason to give an authentication credential—not just a password, but any authentication credential—an expiration date is to limit the amount of time a lost, stolen, or forged credential can be used by someone else. If a membership card expires after a year, then if someone steals that card he can at most get a year’s worth of benefit out of it. After that, it’s useless.

This becomes less important when the credential contains a biometric—even a photograph—or is verified online. It’s much less important for a credit card or passport to have an expiration date, now that they’re not so much bearer documents as just pointers to a database. If, for example, the credit card database knows when a card is no longer valid, there’s no reason to put an expiration date on the card. But the expiration date does mean that a forgery is only good for a limited length of time.

Passwords are no different. If a hacker gets your password either by guessing or stealing it, he can access your network as long as your password is valid. If you have to update your password every quarter, that significantly limits the utility of that password to the attacker.
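Under that traditional theory, the benefit of rotation is just the expected size of the attacker’s window. A back-of-the-envelope sketch, assuming a passive attacker who steals the password at a uniformly random point in the expiration cycle:

```python
# Back-of-the-envelope: how long does a stolen login password stay useful?
# Assumes a passive attacker and a theft at a uniformly random point in
# the expiration cycle, so expected remaining validity is half the cycle.
# The policies below are examples for comparison, not recommendations.
def expected_exposure_days(rotation_days: float) -> float:
    return rotation_days / 2

for policy, days in [("rotate quarterly", 90),
                     ("rotate annually", 365),
                     ("never rotate (5-year horizon)", 5 * 365)]:
    print(f"{policy:30s} -> ~{expected_exposure_days(days):6.1f} days of access")
```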

At least, that’s the traditional theory. It assumes a passive attacker, one who will eavesdrop over time without alerting you that he’s there. In many cases today, though, that assumption no longer holds. An attacker who gets the password to your bank account by guessing or stealing it isn’t going to eavesdrop. He’s going to transfer money out of your account—and then you’re going to notice. In this case, it doesn’t make a lot of sense to change your password regularly—but it’s vital to change it immediately after the fraud occurs.

Someone committing espionage in a private network is more likely to be stealthy. But he’s also not likely to rely on the user credential he guessed and stole; he’s going to install backdoor access or create his own account. Here again, forcing network users to regularly change their passwords is less important than forcing everyone to change their passwords immediately after the spy is detected and removed—you don’t want him getting in again.

Social networking sites are somewhere in the middle. Most of the criminal attacks against Facebook users use the accounts for fraud. “Help! I’m in London and my wallet was stolen. Please wire money to this account. Thank you.” Changing passwords periodically doesn’t help against this attack, although – of course – change your password as soon as you regain control of your account. But if your kid sister has your password—or the tabloid press, if you’re that kind of celebrity—they’re going to listen in until you change it. And you might not find out about it for months.

So in general: you don’t need to regularly change the password to your computer or online financial accounts (including the accounts at retail sites); definitely not for low-security accounts. You should change your corporate login password occasionally, and you need to take a good hard look at your friends, relatives, and paparazzi before deciding how often to change your Facebook password. But if you break up with someone you’ve shared a computer with, change them all.

Two final points. One, this advice is for login passwords. There’s no reason to change any password that is a key to an encrypted file. Just keep the same password as long as you keep the file, unless you suspect it’s been compromised. And two, it’s far more important to choose a good password for the sites that matter—don’t worry about sites you don’t care about that nonetheless demand that you register and choose a password—in the first place than it is to change it. So if you have to worry about something, worry about that. And write your passwords down, or use a program like Password Safe.

This essay originally appeared on DarkReading.com.

EDITED TO ADD (11/14): Microsoft Research says the same thing.

“The Security of Modern Password Expiration: An Algorithmic Framework and Empirical Analysis.”

Posted on November 11, 2010 at 6:45 AM

Securing the Washington Monument

Good article on security options for the Washington Monument:

Unfortunately, the bureaucratic gears are already grinding, and what will be presented to the public Monday doesn’t include important options, including what became known as the “tunnel” in previous discussions of the issue. Nor does it include the choice of more minimal visitor screening—simple wanding or visual bag inspection—that might not require costly and intrusive changes to the structure. The choice to accept risk isn’t on the table, either. Finally, and although it might seem paradoxical given how important resisting security authoritarianism is to preserving the symbolism of freedom, it doesn’t take seriously the idea that perhaps the monument’s interior should be closed altogether—a small concession that might have collateral benefits.

[…]

Closing the interior of the monument, the construction of which was suspended during the Civil War, would remind the public of the effect that fears engendered by the current war on terrorism have had on public space. Closing it as a symbolic act might initiate an overdue discussion about the loss of even more important public spaces, including the front entrance of the Supreme Court and the west terrace of the Capitol. It would be a dramatic reminder of the choices we as a nation have made, and perhaps an inspiration to change our ways in favor of a more open, risk-tolerant society that understands public space always has some element of danger.

EDITED TO ADD (11/15): More information on the decision process.

Posted on November 10, 2010 at 7:09 AM

Crowdsourcing Surveillance

Internet Eyes is a U.K. startup designed to crowdsource digital surveillance. People pay a small fee to become a “Viewer.” Once they do, they can log onto the site and view live anonymous feeds from surveillance cameras at retail stores. If they notice someone shoplifting, they can alert the store owner. Viewers get rated on their ability to differentiate real shoplifting from false alarms, can win 1000 pounds if they detect the most shoplifting in some time interval, and otherwise get paid a wage that most likely won’t cover their initial fee.

Although the system has some nod towards privacy, groups like Privacy International oppose the system for fostering a culture of citizen spies. More fundamentally, though, I don’t think the system will work. Internet Eyes is primarily relying on voyeurism to compensate its Viewers. But most of what goes on in a retail store is incredibly boring. Some of it is actually voyeuristic, and very little of it is criminal. The incentives just aren’t there for Viewers to do more than peek, and there’s no obvious way to discourage them from siding with the shoplifter and just watching the scenario unfold.

This isn’t the first time groups have tried to crowdsource surveillance camera monitoring. Texas’s Virtual Border Patrol tried the same thing: deputizing the general public to monitor the Texas-Mexico border. It ran out of money last year, and was widely criticized as a joke.

This system suffered the same problems as Internet Eyes—not enough incentive to do a good job, boredom because crime is the rare exception—as well as the fact that false alarms were very expensive to deal with.

Both of these systems remind me of the one time this idea was conceptualized correctly. Invented in 2003 by my friend and colleague Jay Walker, US HomeGuard also tried to crowdsource surveillance camera monitoring. But this system focused on one very specific security concern: people in no-man’s areas. These are areas between fences at nuclear power plants or oil refineries, border zones, areas around dams and reservoirs, and so on: areas where there should never be anyone.

The idea is that people would register to become “spotters.” They would get paid a decent wage (that, plus patriotism, was the incentive), receive a stream of still photos, and be asked a very simple question: “Is there a person or a vehicle in this picture?” If a spotter clicked “yes,” the photo—and the camera—would be referred to whatever professional response the camera owner had set up.

HomeGuard would monitor the monitors in two ways: one, by regularly sending stored, known photos to spotters to verify that they were paying attention; and two, by sending live photos to multiple spotters and correlating the results, sending a photo to many more spotters whenever one claimed to have spotted a person or vehicle.
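A minimal sketch of those two checks, with hypothetical record types and a two-thirds quorum of my own choosing:

```python
# Sketch of HomeGuard-style "monitoring the monitors": seeded test photos
# measure whether a spotter is paying attention, and a live photo is
# referred onward only when enough independent spotters agree. The field
# names and the two-thirds quorum are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Answer:
    spotter_id: str
    saw_something: bool   # answer to "is there a person or vehicle here?"

def attention_score(correct_on_seeded_photos: list[bool]) -> float:
    """Fraction of seeded, known photos the spotter answered correctly."""
    if not correct_on_seeded_photos:
        return 0.0
    return sum(correct_on_seeded_photos) / len(correct_on_seeded_photos)

def refer_to_response(answers: list[Answer], quorum: float = 2 / 3) -> bool:
    """Escalate a live photo only on consensus among multiple spotters."""
    if not answers:
        return False
    positives = sum(a.saw_something for a in answers)
    return positives / len(answers) >= quorum
```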

Just knowing that there’s a person or a vehicle in a no-mans area is only the first step in a useful response, and HomeGuard envisioned a bunch of enhancements to the rest of that system. Flagged photos could be sent to the digital phones of patrolling guards, cameras could be controlled remotely by those guards, and speakers in the cameras could issue warnings. Remote citizen spotters were only useful for that first step, looking for a person or a vehicle in a photo that shouldn’t contain any. Only real guards at the site itself could tell an intruder from the occasional maintenance person.

Of course the system isn’t perfect. A would-be infiltrator could sneak past the spotters by holding a bush in front of him, or disguising himself as a vending machine. But it does fill in a gap in what fully automated systems can do, at least until image processing and artificial intelligence get significantly better.

HomeGuard never got off the ground: there was never any good data about whether spotters would be more effective than motion sensors as a first level of defense. More importantly, Walker says that the politics surrounding homeland security money post-9/11 were just too difficult to penetrate, and that as an outsider he couldn’t get his ideas heard. Today the patriotic fervor that gripped so many people post-9/11 has faded, and he’d probably have to pay his spotters more than he envisioned seven years ago. Still, I thought it was a clever idea then and I still think it’s a clever one—an example of how to do surveillance crowdsourcing correctly.

Making the system more general runs into all sorts of problems. An amateur can spot a person or vehicle pretty easily but is much harder pressed to notice a shoplifter. The privacy implications of showing random people pictures of no-man’s lands are minimal, while a busy store is another matter: stores have enough individuality to be identifiable, as do people, and public photo tagging would even allow the identification to be automated. And, of course, there’s the normalization of a spy-on-your-neighbor surveillance society, where it’s perfectly reasonable to watch each other on cameras just in case one of us does something wrong.

This essay first appeared in ThreatPost.

Posted on November 9, 2010 at 12:59 PM31 Comments

Kahn, Diffie, Clark, and Me at Bletchley Park

Saturday, I visited Bletchley Park to speak at the Annual ACCU Security Fundraising Conference. They had a stellar lineup of speakers this year, and I was pleased to be a part of the day.

Talk #1: “The Art of Forensic Warfare,” Andy Clark. Riffing on Sun Tzu’s The Art of War, Clark discussed the war—the back and forth—between cyber attackers and cyber forensics. This isn’t to say that we’re at war, but today’s attacker tactics are increasingly sophisticated and warlike. Additionally, the pace is greater, the scale of impact is greater, and the subjects of attack are broader. To defend ourselves, we need to be equally sophisticated and—possibly—more warlike.

Clark drew parallels from some of the chapters of Sun Tzu’s book combined with examples of the work at Bletchley Park. Laying plans: when faced with an attacker—especially one of unknown capabilities, tactics, and motives—it’s important to both plan ahead and plan for the unexpected. Attack by stratagem: increasingly, attackers are employing complex and long-term strategies; defenders need to do the same. Energy: attacks increasingly start off simple and get more complex over time; while it’s easier to deflect primary attacks, secondary techniques tend to be more subtle and harder to detect. Terrain: modern attacks take place across a very broad range of terrain, including hardware, OSs, networks, communication protocols, and applications. The business environment under attack is another example of terrain, equally complex. The use of spies: not only human spies, but also keyloggers and other embedded eavesdropping malware. There’s a great World War II double-agent story about Eddie Chapman, codenamed ZIGZAG.

Talk #2: “How the Allies Suppressed the Second Greatest Secret of World War II,” David Kahn. This talk is from Kahn’s article of the same name, published in the Oct 2010 issue of The Journal of Military History. The greatest secret of World War II was the atom bomb; the second greatest secret was that the Allies were reading the German codes. But while there was a lot of public information in the years after World War II about Japanese codebreaking and its value, there was almost nothing about German codebreaking. Kahn discussed how this information was suppressed, and how historians writing World War II histories never figured it out. No one imagined as large and complex an operation as Bletchley Park; it was the first time in history that something like this had ever happened. Most of Kahn’s time was spent in a very interesting Q&A about the history of Bletchley Park and World War II codebreaking.

Talk #3: “DNSSec, A System for Improving Security of the Internet Domain Name System,” Whitfield Diffie. Whit talked about three watersheds in modern communications security. The first was the invention of radio. Pre-radio, the most common communications security device was the code book; that was no longer enough once radio caused the amount of communications to explode. In response, inventors took the research on Vigenère ciphers and automated it. This automation led to an explosion of designs and an enormous increase in complexity—and the rise of modern cryptography.

The second watershed was shared computing. Before the 1960s, the security of computers was the physical security of computer rooms. Timesharing changed that. The result was computer security, a much harder problem than cryptography. Computer security is primarily the problem of writing good code. But writing good code is hard and expensive, so functional computer security is primarily the problem of dealing with code that isn’t good. Networking—and the Internet—isn’t just an expansion of computing capacity. The real difference is how cheap it is to set up communications connections. Setting up these connections requires naming: both IP addresses and domain names. Security, of course, is essential for this all to work; DNSSec is a critical part of that.
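
Since DNSSEC came up, here is a toy sketch of the chain-of-trust idea behind it, using Ed25519 signatures from the third-party Python cryptography package. The record names and formats below are simplified illustrations, not the real DNSSEC wire protocol:

```python
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

# The child zone has a signing key; the parent zone publishes a digest of
# the child's public key (a DS-style record), so trust chains downward
# from a root key the resolver already holds.
child_key = Ed25519PrivateKey.generate()
child_pub = child_key.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw)
ds_record = hashlib.sha256(child_pub).hexdigest()   # held by the parent zone

# The child zone signs its address records (an RRSIG-style signature).
rrset = b"www.example.com. A 192.0.2.1"
signature = child_key.sign(rrset)

# A validating resolver checks both links in the chain:
assert hashlib.sha256(child_pub).hexdigest() == ds_record  # key matches parent's DS
child_key.public_key().verify(signature, rrset)  # raises if records were altered
print("chain of trust verified")
```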

The third watershed is cloud computing, or whatever you want to call the general trend of outsourcing computation. Google is a good example. Every organization uses Google search all the time, which probably makes it the most valuable intelligence stream on the planet. How can you protect yourself? You can’t, just as you can’t whenever you hand over your data for storage or processing—you just have to trust your outsourcer. There are two solutions. The first is legal: an enforceable contract that protects you and your data. The second is technical, but mostly theoretical: homomorphic encryption that allows you to outsource computation on data without having to trust the outsourcer.
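
To make the homomorphic idea concrete (this is my illustration, not Diffie’s example), here is a toy Paillier-style cryptosystem in Python. It is additively homomorphic: the server multiplies two ciphertexts and thereby adds the underlying plaintexts without ever seeing them. The parameters are deliberately tiny and wildly insecure:

```python
import math, random

p, q = 293, 433              # toy primes; real keys use primes of ~1024 bits
n, n2 = p * q, (p * q) ** 2
g = n + 1
lam = math.lcm(p - 1, q - 1)
mu = pow((pow(g, lam, n2) - 1) // n, -1, n)   # inverse of L(g^lam mod n^2) mod n

def encrypt(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:                # blinding factor coprime to n
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return ((pow(c, lam, n2) - 1) // n * mu) % n

# The untrusted server sums 41 and 17 by multiplying the ciphertexts.
a, b = encrypt(41), encrypt(17)
assert decrypt((a * b) % n2) == 58
print("the server added the numbers without decrypting them")
```

Fully homomorphic schemes, which also support multiplication and hence arbitrary computation, existed only as Gentry’s 2009 theoretical construction at this point—presumably why Diffie called the technical solution mostly theoretical.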

Diffie’s final point is that we’re entering an era of unprecedented surveillance possibilities. It doesn’t matter if people encrypt their communications, or if they encrypt their data in storage. As long as they have to give their data to other people for processing, eavesdropping will be possible. Of course the methods will change, but the result will be an enormous trove of information about everybody.

Talk #4: “Reconceptualizing Security,” me. It was similar to this essay and this video.

Posted on November 9, 2010 at 6:01 AM24 Comments

Young Man in "Old Man" Mask Boards Plane in Hong Kong

It’s kind of an amazing story. A young Asian man used a rubber mask to disguise himself as an old Caucasian man and, with a passport photo that matched his disguise, got through all customs and airport security checks and onto a plane to Canada.

The fact that this sort of thing happens occasionally doesn’t surprise me. It’s human nature that we miss this sort of thing. I wrote about it in Beyond Fear (pages 153–4):

No matter how much training they get, airport screeners routinely miss guns and knives packed in carry-on luggage. In part, that’s the result of human beings having developed the evolutionary survival skill of pattern matching: the ability to pick out patterns from masses of random visual data. Is that a ripe fruit on that tree? Is that a lion stalking quietly through the grass? We are so good at this that we see patterns in anything, even if they’re not really there: faces in inkblots, images in clouds, and trends in graphs of random data. Generating false positives helped us stay alive; maybe that wasn’t a lion that your ancestor saw, but it was better to be safe than sorry. Unfortunately, that survival skill also has a failure mode. As talented as we are at detecting patterns in random data, we are equally terrible at detecting exceptions in uniform data. The quality-control inspector at Spacely Sprockets, staring at a production line filled with identical sprockets looking for the one that is different, can’t do it. The brain quickly concludes that all the sprockets are the same, so there’s no point paying attention. Each new sprocket confirms the pattern. By the time an anomalous sprocket rolls off the assembly line, the brain simply doesn’t notice it. This psychological problem has been identified in inspectors of all kinds; people can’t remain alert to rare events, so they slip by.

Customs officers spend hours looking at people and comparing their faces with their passport photos. They do it on autopilot. Will they catch someone in a rubber mask that matches his passport photo? Probably, but certainly not all the time.

Yes, this is a security risk, but it’s not a big one. Because while—occasionally—a gun can slip through a metal detector or a masked man can slip through customs, it doesn’t happen reliably. So the bad guys can’t build a plot around it.

One last point: the young man in the old-man mask was captured by Canadian police. His fellow passengers noticed him. So in the end, his plot failed. Security didn’t fail, although a bunch of pieces of it did.

EDITED TO ADD (11/10): Comment (from below) about what actually happened.

Posted on November 8, 2010 at 2:55 PM38 Comments

The End of In-Flight Wi-Fi?

Okay, now the terrorists have really affected me personally: they’re forcing us to turn off airplane Wi-Fi. No, it’s not that the Yemeni package bombs had a Wi-Fi triggering mechanism—they seem to have had a cell phone triggering mechanism, dubious at best—but we can imagine an Internet-based triggering mechanism. Put together a sloppy and unsuccessful package bomb with an imagined triggering mechanism, and you have a new and dangerous threat that—even though it was a threat ever since the first airplane got Wi-Fi capability—must be immediately dealt with right now.

Please, let’s not ever tell the TSA about timers. Or altimeters.

And, while we’re talking about the TSA, be sure to opt out of the full-body scanners and remember your sense of humor when a TSA officer slips white powder into your suitcase and then threatens you with arrest.

EDITED TO ADD (11/8): We’re banning toner cartridges over 16 ounces.

Additionally, toner and ink cartridges that are over 16 ounces will be banned from all U.S. passenger flights and planes heading to the United States, she said. That ban will also apply to some air cargo shipments.

Other new rules include:

  • International mail packages sent to the U.S. must be screened individually and certified to have come from an established postal shipper;
  • Cargo shippers, such as UPS, Federal Express, and DHL, have been encouraged to report cargo manifests to Homeland Security faster, prior to departure, to aid in identifying risky cargo based on current intelligence.

There’s some impressive magical thinking going on here.

Posted on November 8, 2010 at 10:21 AM105 Comments

The Business of Botnets

It can be lucrative:

Avanesov allegedly rented and sold part of his botnet, a common business model for those who run the networks. Other cybercriminals can rent the hacked machines for a specific time for their own purposes, such as sending a spam run or mining the PCs for personal details and files, among other nefarious actions.

Dutch prosecutors believe that Avanesov made up to €100,000 ($139,000) a month from renting and selling his botnet just for spam, said Wim De Bruin, spokesman for the Public Prosecution Service in Rotterdam. Avanesov was able to sell parts of the botnet off “because it was very easy for him to extend the botnet again,” by infecting more PCs, he said.

EDITED TO ADD (11/11): Paper on the market price of bots.

Posted on November 4, 2010 at 7:04 AM45 Comments

Did the FBI Invent the D.C. Bomb Plot?

Last week the police arrested Farooque Ahmed for plotting a terrorist attack on the D.C. Metro system. However, it’s not clear how much of the plot was his idea and how much was the idea of some paid FBI informants:

The indictment offers some juicy tidbits—Ahmed allegedly proposed using rolling suitcases instead of backpacks to bomb the Metro—but it is notably thin in details about the role of the FBI. It is not clear, for example, whether Ahmed or the FBI (or some combination of the two) came up with the concept of bombing the Metro in the first place. And the indictment does not say when and why Ahmed first encountered the people he believed to be members of al-Qaida.

Of course the police are now using this fake bomb plot to justify random bag searching in the Metro. (It’s a dumb idea.)

This is the problem with thoughtcrime. Entrapment is much too easy.

EDITED TO ADD (11/4): Much the same thing was written in The Economist blog.

Posted on November 3, 2010 at 7:06 AM46 Comments

Dan Geer on "Cybersecurity and National Policy"

Worth reading:

Those with either an engineering or management background are aware that one cannot optimize everything at once—that requirements are balanced by constraints. I am not aware of another domain where this is as true as it is in cybersecurity and the question of a policy response to cyber insecurity at the national level. In engineering, this is said as “Fast, Cheap, Reliable: Choose Two.” In the public policy arena, we must first remember the definition of a free country: a place where that which is not forbidden is permitted. As we consider the pursuit of cybersecurity, we will return to that idea time and time again; I believe that we are now faced with “Freedom, Security, Convenience: Choose Two.”

Posted on November 2, 2010 at 5:51 AM35 Comments

Control Fraud

I had never heard the term “control fraud” before:

Control fraud theory was developed in the savings and loan debacle. It explained that the person controlling the S&L (typically the CEO) posed a unique risk because he could use it as a weapon.

The theory synthesized criminology (Wheeler and Rothman 1982), economics (Akerlof 1970), accounting, law, finance, and political science. It explained how a CEO optimized “his” S&L as a weapon to loot creditors and shareholders. The weapon of choice was accounting fraud. The company is the perpetrator and a victim. Control frauds are optimal looters because the CEO has four unique advantages. He uses his ability to hire and fire to suborn internal and external controls and make them allies. Control frauds consistently get “clean” opinions for financial statements that show record profitability when the company is insolvent and unprofitable. CEOs choose top-tier auditors. Their reputation helps deceive creditors and shareholders.

Only the CEO can optimize the company for fraud.

This is an interesting paper about control fraud. It’s by William K. Black, the Executive Director of the Institute for Fraud Prevention. “Individual ‘control frauds’ cause greater losses than all other forms of property crime combined. They are financial super-predators.” Black is talking about control fraud by both heads of corporations and heads of state, so that’s almost certainly a true statement. His main point, though, is that our legal systems don’t do enough to discourage control fraud.

White-collar criminology has a set of empirical findings and theories that are useful to understanding when markets will act perversely. This paper addresses three, interrelated theories economists should know about. “Control fraud” theory explains why the most damaging forms of fraud are situations in which those that control the company or the nation use it as a fraud vehicle. The CEO, or the head of state, poses the greatest fraud risk. A single large control fraud can cause greater financial losses than all other forms of property crime combined; they are the “super-predators” of the financial world. Control frauds can also occur in waves that can cause systemic economic injury and discredit other institutions essential to good government and society. Control frauds are commonly able to defeat for several years market mechanisms that neo-classical economists predict will prevent such frauds.

“Systems capacity” theory examines why under-deterrence is so common. It shows that, particularly with respect to elite crimes, anti-fraud resources and willpower are commonly so limited that “crime pays.” When systems capacity limitations are severe, a “criminogenic environment” arises and crime increases. When a criminogenic environment for control fraud occurs, it can produce a wave of control fraud.

“Neutralization” theory explores how criminals neutralize moral and social barriers that reduce crime by constraining our decision-making to honest enterprises. The easier individuals are able to neutralize such social restraints, the greater the incidence of crime.

[…]

White-collar criminology findings falsify several neo-classical economic theories. This paper discusses the predictive failures of the efficient markets hypothesis, the efficient contracts hypothesis and the law & economics theory of corporate law. The paper argues that neo-classical economists’ reliance on these flawed models leads them to recommend policies that optimize a criminogenic environment for control fraud. Fortunately, these policies are not routinely adopted in full. When they are, they produce recurrent crises because they eviscerate the institutions and mores vital to make markets and governments more efficient in preventing waves of control fraud. Criminological theories have demonstrated superior predictive and explanatory behavior with regard to perverse economic behavior. This paper discusses two realms of perverse behavior: the role of waves of control fraud in producing economic crises, and the role that endemic control fraud plays in producing economic stagnation.

EDITED TO ADD (11/11): Related paper on the effects of executive compensation on the abuse of controls.

Posted on November 1, 2010 at 6:02 AM47 Comments
