Blog: July 2007 Archives

More on the French Bank Hack

A year ago, I blogged about a bank hack at the center of a French national scandal.

Well, the case has taken an interesting turn. After about a year of work, law enforcement experts managed to retrieve incriminating evidence from the hard disk of General Rondot, a senior intelligence officer.

Wouldn’t we all like to know the technical details of both the data shredding and forensic technologies?

Posted on July 31, 2007 at 1:10 PM • 36 Comments

California Voting Machine Audit Results

The state of California conducted a security review of their electronic voting machines earlier this year. This was a serious review, with real security researchers getting access to the source code. The report was issued last week, and the researchers were able to compromise all three machines—by Diebold Election Systems, Hart InterCivic, and Sequoia Voting Systems—multiple ways. (They said they could probably find more ways, if they had more time.)

Final report and details about the audit here. Good blog entries here and here. We don’t know what California will do now.

This is no surprise, really. The notion that electronic voting machines were somehow more secure than every other computer system ever built was ridiculous from the start. And the claims by machine manufacturers that releasing their source code would hurt the security of the machines were—like all such claims—really an attempt to prevent embarrassment to the company.

Not everyone gets this, unfortunately—not even everyone involved in running elections:

Letting the hackers have the source codes, operating manuals and unlimited access to the voting machines “is like giving a burglar the keys to your house,” said Steve Weir, clerk-recorder of Contra Costa County and head of the state Association of Clerks and Election Officials.

No. It’s like giving burglars the schematics, installation manuals, and unlimited access to your front door lock. If your lock is good, it will survive the burglar having that information. If your lock isn’t good, the burglar will get in.

I have two essays on this, from 2004: “Why Election Technology is Hard” and “Electronic Voting Machines.” This essay—“Voting and Technology”—was written in 2000.

EDITED TO ADD (7/31): Another article.

EDITED TO ADD (8/2): Good commentary.

Posted on July 31, 2007 at 10:57 AM • 47 Comments

Conversation with Kip Hawley, TSA Administrator (Part 2)

This is Part 2 of a five-part series. Link to whole thing.

BS: I hope you’re telling the truth; screening is a difficult problem, and it’s hard to discount all of those published tests and reports. But a lot of the security around these checkpoints is about perception—we want potential terrorists to think there’s a significant chance they won’t get through the checkpoints—so you’re better off maintaining that the screeners are better than reports indicate, even if they’re not.

Backscatter X-ray is another technology that is causing privacy concerns, since it basically allows you to see people naked. Can you explain the benefits of the technology, and what you are doing to protect privacy? Although the machines can distort the images, we know that they can store raw, unfiltered images; the manufacturer Rapiscan is quite proud of the fact. Are the machines you’re using routinely storing images? Can they store images at the screener’s discretion, or is that capability turned off at installation?

KH: We’re still evaluating backscatter and are in the process of running millimeter wave portals right alongside backscatter to compare their effectiveness and the privacy issues. We do not now store images for the test phase (function disabled), and although we haven’t officially resolved the issue, I fully understand the privacy argument and don’t assume that we will store them if and when they’re widely deployed.

BS: When can we keep our shoes on?

KH: Any time after you clear security. Sorry, Bruce, I don’t like it either, but this is not just something leftover from 2002. It is a real, current concern. We’re looking at shoe scanners and ways of using millimeter wave and/or backscatter to get there, but until the technology catches up to the risk, the shoes have to go in the bin.

BS: This feels so much like “cover your ass” security: you’re screening our shoes because everyone knows Richard Reid hid explosives in them, and you’ll be raked over the coals if that particular plot ever happens again. But there are literally thousands of possible plots.

So when does it end? The terrorists invented a particular tactic, and you’re defending against it. But you’re playing a game you can’t win. You ban guns and bombs, so the terrorists use box cutters. You ban small blades and knitting needles, and they hide explosives in their shoes. You screen shoes, so they invent a liquid explosive. You restrict liquids, and they’re going to do something else. The terrorists are going to look at what you’re confiscating, and they’re going to design a plot to bypass your security.

That’s the real lesson of the liquid bombers. Assuming you’re right and the explosive was real, it was an explosive that none of the security measures at the time would have detected. So why play this slow game of whittling down what people can bring onto airplanes? When do you say: “Enough. It’s not about the details of the tactic; it’s about the broad threat”?

KH: In late 2005, I made a big deal about focusing on Improvised Explosive Devices (IEDs) and not chasing all the things that could be used as weapons. Until the liquids plot this summer, we were defending our decision to let scissors and small tools back on planes and trying to add layers like behavior detection and document checking, so it is ironic that you ask this question—I am in vehement agreement with your premise. We’d rather focus on things that can do catastrophic harm (bombs!) and add layers to get people with hostile intent to highlight themselves. We have a responsibility, though, to address known continued active attack methods like shoes and liquids and, unfortunately, have to use our somewhat clunky process for now.

BS: You don’t have a responsibility to screen shoes; you have one to protect air travel from terrorism to the best of your ability. You’re picking and choosing. We know the Chechen terrorists who downed two Russian planes in 2004 got through security partly because different people carried the explosive and the detonator. Why doesn’t this count as a continued, active attack method?

I don’t want to even think about how much C4 I can strap to my legs and walk through your magnetometers. Or search the Internet for “BeerBelly.” It’s a device you can strap to your chest to smuggle beer into stadiums, but you can also use it to smuggle 40 ounces of dangerous liquid explosive onto planes. The magnetometer won’t detect it. Your secondary screening wandings won’t detect it. Why aren’t you making us all take our shirts off? Will you have to find a printout of the webpage in some terrorist safe house? Or will someone actually have to try it? If that doesn’t bother you, search the Internet for “cell phone gun.”

It’s “cover your ass” security. If someone tries to blow up a plane with a shoe or a liquid, you’ll take a lot of blame for not catching it. But if someone uses any of these other, equally known, attack methods, you’ll be blamed less because they’re less public.

KH: Dead wrong! Our security strategy assumes an adaptive terrorist, and that looking backwards is not a reliable predictor of the next type of attack. Yes, we screen for shoe bombs and liquids, because it would be stupid not to directly address attack methods that we believe to be active. Overall, we are getting away from trying to predict what the object looks like and looking more for the other markers of a terrorist. (Don’t forget, we see two million people a day, so we know what normal looks like.) What he/she does; the way they behave. That way we don’t put all our eggs in the basket of catching them in the act. We can’t give them free rein to surveil or do dry-runs; we need to put up obstacles for them at every turn. Working backwards, what do you need to do to be successful in an attack? Find the decision points that show the difference between normal action and action needed for an attack. Our odds are better with this approach than by trying to take away methods, annoying object by annoying object. Bruce, as for blame, that’s nothing compared to what all of us would carry inside if we failed to prevent an attack.

Part 3: The no-fly list

Posted on July 31, 2007 at 6:12 AM • 100 Comments

Transporting a $1.9M Rare Coin

Excellent story of security by obscurity:

Feigenbaum put the dime, encased in a 3-inch-square block of plastic, in his pocket and, accompanied by a security guard, drove in an ordinary sedan directly to San Jose airport to catch the red-eye to Newark.

The overnight flight, he said, was the only way to make sure the dime would be in New York by the time the buyer’s bank opened in the morning. People who pay $1.9 million for dimes do not like to be kept waiting for them.

Feigenbaum had purchased a coach ticket, to avoid suspicion, but found himself upgraded to first class. That was a worry, because people in flip-flops, T-shirts and grubby jeans do not regularly ride in first class. But it would have been more suspicious to decline a free upgrade. So Feigenbaum forced himself to sit in first class, where he found himself to be the only passenger in flip-flops.

He was too nervous to sleep, he said. He did not watch the in-flight movie, which was “Firehouse Dog.” He turned down a Reuben sandwich and sensibly declined all offers of alcoholic beverages.

Shortly after boarding the plane, he transferred the dime from his pants pocket to his briefcase.

“I was worried that the dime might fall out of my pocket while I was sitting down,” Feigenbaum said.

All across the country, Feigenbaum kept checking to make sure the dime was safe by reaching into his briefcase to feel for it. Feigenbaum did not actually take the dime out of his briefcase, as it is suspicious to stare at dimes.

This isn’t the first time security through obscurity was employed to transport a very small and very valuable object. From Beyond Fear, pp. 211–212:

At 3,106 carats, a little under a pound and a half, the Cullinan Diamond was the largest uncut diamond ever discovered. It was extracted from the earth at the Premier Mine, near Pretoria, South Africa, in 1905. Appreciating the literal enormity of the find, the Transvaal government bought the diamond as a gift for King Edward VII. Transporting the stone to England was a huge security problem, of course, and there was much debate on how best to do it. Detectives were sent from London to guard it on its journey. News leaked that a certain steamer was carrying it, and the presence of the detectives confirmed this. But the diamond on that steamer was a fake. Only a few people knew of the real plan; they packed the Cullinan in a small box, stuck a three-shilling stamp on it, and sent it to England anonymously by unregistered parcel post.

Like all security measures, security by obscurity has its place. I wrote a lot more about the general concepts in this 2002 essay.

Posted on July 30, 2007 at 4:30 PM • 42 Comments

Movie-Plot-Threat Presidential Debate Questions

Funny:

Gentlemen, here’s the scenario: As you are flying home from Moscow—having told the world you will never deal with terrorists—hijackers, posing as reporters, seize Air Force One. They vow to kill a hostage every half-hour, including your wife and daughter, until you release a murderous Russian general. I’ll start with Senator Obama. Do you negotiate with the hijackers in the hope of saving lives, or do you flee into the bowels of the craft, then pick them off, one by one, with makeshift shanks and your bare hands?

Candidates, pay attention: An international financier has smuggled an atom bomb into Fort Knox. He loves only gold. Only gold. After an amazing sequence of events, including car chases, sexual conquests, and your defeat of the assassin known as Oddjob, you find yourself staring at the interior of a nuclear device. The final seconds are ticking down. This goes to you, Senator Clinton: Do you cut the blue wire, or do you cut the red wire?

A tornado has transported you to a magical land, where a jubilant throng of midgets greets you as liberator. They direct you toward a road paved with yellow bricks. We’ll start with you, Mayor Giuliani. Would you consider capturing one of these exotic creatures and subjecting him or her to enhanced interrogation techniques, such as waterboarding and electric shock, if it means extracting vital information that will determine whether the yellow route leads home—or into a trap?

More questions in the article.

Posted on July 30, 2007 at 2:51 PM • 21 Comments

Conversation with Kip Hawley, TSA Administrator (Part 1)

This is Part 1 of a five-part series. Link to whole thing.

In April, Kip Hawley, the head of the Transportation Security Administration (TSA), invited me to Washington for a meeting. Despite some serious trepidation, I accepted. And it was a good meeting. Most of it was off the record, but he asked me how the TSA could overcome its negative image. I told him to be more transparent, and stop ducking the hard questions. He said that he wanted to do that; he had enjoyed writing a guest blog post for Aviation Daily, but running a blog of his own wouldn’t work within the bureaucracy. What else could he do?

This interview, conducted in May and June via e-mail, was one of my suggestions.

Bruce Schneier: By today’s rules, I can carry on liquids in quantities of three ounces or less, unless they’re in larger bottles. But I can carry on multiple three-ounce bottles. Or a single larger bottle with a non-prescription medicine label, like contact lens fluid. It all has to fit inside a one-quart plastic bag, except for that large bottle of contact lens fluid. And if you confiscate my liquids, you’re going to toss them into a large pile right next to the screening station—which you would never do if anyone thought they were actually dangerous.

Can you please convince me there’s not an Office for Annoying Air Travelers making this sort of stuff up?

Kip Hawley: Screening ideas are indeed thought up by the Office for Annoying Air Travelers and vetted through the Directorate for Confusion and Complexity, and then we review them to ensure that there are sufficient unintended irritating consequences so that the blogosphere is constantly fueled. Imagine for a moment that TSA people are somewhat bright, and motivated to protect the public with the least intrusion into their lives, not to mention travel themselves. How might you engineer backwards from that premise to get to three ounces and a baggie?

We faced a different kind of liquid explosive, one that was engineered to evade then-existing technology and process. Not the old Bojinka formula or other well-understood ones—TSA already trains and tests on those. After August 10, we began testing different variants with the national labs, among others, and engaged with other countries that have sophisticated explosives capabilities to find out what is necessary to reliably bring down a plane.

We started with the premise that we should prohibit only what’s needed from a security perspective. Otherwise, we would have stuck with a total liquid ban. But we learned through testing that no matter what someone brought on, if it was in a small enough container, it wasn’t a serious threat. So what would the justification be for prohibiting lip gloss, nasal spray, etc.? There was none, other than for our own convenience and the sake of a simple explanation.

Based on the scientific findings and a don’t-intrude-unless-needed-for-security philosophy, we came up with a container size that eliminates an assembled bomb (without having to determine what exactly is inside the bottle labeled “shampoo”), limits the total liquid any one person can bring (without requiring Transportation Security Officers (TSOs) to count individual bottles), and allows for additional security measures relating to multiple people mixing a bomb post-checkpoint. Three ounces and a baggie in the bin gives us a way for people to safely bring on limited quantities of liquids, aerosols and gels.

BS: How will this foil a plot, given that there are no consequences to trying? Airplane contraband falls into two broad categories: stuff you get in trouble for trying to smuggle onboard, and stuff that just gets taken away from you. If I’m caught at a security checkpoint with a gun or a bomb, you’re going to call the police and really ruin my day. But if I have a large bottle of that liquid explosive, you confiscate it with a smile and let me through. So unless you’re 100% perfect in catching this stuff—which you’re not—I can just try again and again until I get it through.

This isn’t like contaminants in food, where if you remove 90% of the particles, you’re 90% safer. None of those false alarms—none of those innocuous liquids taken away from innocent travelers—improve security. We’re only safer if you catch the one explosive liquid amongst the millions of containers of water, shampoo, and toothpaste. I have described two ways to get large amounts of liquids onto airplanes—large bottles labeled “saline solution” and trying until the screeners miss the liquid—not to mention combining multiple little bottles of liquid into one big bottle after the security checkpoint.

I want to assume the TSA is both intelligent and motivated to protect us. I’m taking your word for it that there is an actual threat—lots of chemists disagree—but your liquid ban isn’t mitigating it. Instead, I have the sinking feeling that you’re defending us against a terrorist smart enough to develop his own liquid explosive, yet too stupid to read the rules on TSA’s own website.

KH: I think your premise is wrong. There are consequences to coming to an airport with a bomb and having some of the materials taken away at the checkpoint. Putting aside our layers of security for the moment, there are things you can do to get a TSO’s attention at the checkpoint. If a TSO finds you or the contents of your bag suspicious, you might get interviewed and/or have your bags more closely examined. If the TSO just throws your liquids in the trash, it’s because they don’t consider you a threat.

I often read blog posts about how someone could just take all their three-ounce bottles—or take bottles from others on the plane—and combine them into a larger container to make a bomb. I can’t get into the specifics, but our explosives research shows this is not a viable option.

The current system is not the best we’ll ever come up with. In the near future, we’ll come up with an automated system to take care of liquids, and everyone will be happier.

In the meantime, we have begun using hand-held devices that can recognize threat liquids through factory-sealed containers (we will increase their number through the rest of the year) and we have different test strips that are effective when a bottle is opened. Right now, we’re using them on exempt items like medicines, as well as undeclared liquids TSOs find in bags. This will help close the vulnerability and strengthen the deterrent.

BS: People regularly point to security checkpoints missing a knife in their handbag as evidence that security screening isn’t working. But that’s wrong. Complete effectiveness is not the goal; the checkpoints just have to be effective enough so that the terrorists are worried their plan will be uncovered. But in Denver earlier this year, testers sneaked 90% of weapons through. And other tests aren’t much better. Why are these numbers so poor, and why didn’t they get better when the TSA took over airport security?

KH: Your first point is dead on and is the key to how we look at security. The stories about 90% failures are wrong or extremely misleading. We do many kinds of effectiveness tests at checkpoints daily. We use them to guide training and decisions on technology and operating procedures. We also do extensive and very sophisticated Red Team testing, and one of their jobs is to observe checkpoints and go back and figure out—based on inside knowledge of what we do—ways to beat the system. They isolate one particular thing: for example, a particular explosive, made and placed in a way that exploits a particular weakness in technology; our procedures; or the way TSOs do things in practice. Then they will test that particular thing over and over until they identify what corrective action is needed. We then change technology or procedure, or plain old focus on execution. And we repeat the process—forever.

So without getting into specifics on the test results, of course there are times that our evaluations can generate high failure rate numbers on specific scenarios. Overall, though, our ability to detect bomb components is vastly improved and it will keep getting better. (Older scores you may have seen may be “feel good” numbers based on old, easy tests. Don’t go for the sound-bite; today’s TSOs are light-years ahead of even where they were two years ago.)

Part 2: When can we keep our shoes on?

Posted on July 30, 2007 at 6:12 AM • 183 Comments

See-Through Backpacks Required at School

Talk about your movie-plot threats:

Wissahickon is the most recent district to mandate see-through backpacks, joining several other area suburban districts and private schools as they look to avert tragedies like the 1999 gun killings at Columbine and last December’s gunshot suicide by a student inside Montgomery County’s Springfield Township High School.

Yeah, like that’s going to help.

Posted on July 27, 2007 at 2:31 PM • 66 Comments

Security Analysis of a 13th Century Venetian Election Protocol

I love stuff like this: “Electing the Doge of Venice: Analysis of a 13th Century Protocol,” by Miranda Mowbray and Dieter Gollmann.

This paper discusses the protocol used for electing the Doge of Venice between 1268 and the end of the Republic in 1797. We will show that it has some useful properties that in addition to being interesting in themselves, also suggest that its fundamental design principle is worth investigating for application to leader election protocols in computer science. For example it gives some opportunities to minorities while ensuring that more popular candidates are more likely to win, and offers some resistance to corruption of voters. The most obvious feature of this protocol is that it is complicated and would have taken a long time to carry out. We will advance a hypothesis as to why it is so complicated, and describe a simplified protocol with very similar features.

Venice was very clever about working to avoid the factionalism that tore apart a lot of its Italian rivals, while making the various factions feel represented.
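The round structure itself is well documented in the historical record: ten alternating stages of sortition (selection by lot) and election, ending with a college of forty-one that chose the Doge. Here is a toy Python sketch of that alternation. The stage sizes follow the historical sequence; the nomination behavior is a made-up stand-in, since modeling the actual approval-voting rules is precisely what the paper does.

    import random
    from collections import Counter

    def by_lot(group, k):
        """Sortition: reduce a group to k members chosen uniformly at random."""
        return random.sample(group, k)

    def elect(electors, council, k):
        """Election stage (toy stand-in): each elector nominates a few council
        members, and the k most-nominated win. The real protocol required a
        supermajority of electors to approve each nominee."""
        votes = Counter()
        for _ in electors:
            votes.update(random.sample(council, 4))
        return [member for member, _ in votes.most_common(k)]

    def doge_election(council):
        """Historical sequence: 30 by lot, 9 by lot, elect 40, 12 by lot,
        elect 25, 9 by lot, elect 45, 11 by lot, elect 41, who elect the Doge."""
        group = by_lot(by_lot(council, 30), 9)
        for elect_k, lot_k in [(40, 12), (25, 9), (45, 11), (41, None)]:
            group = elect(group, council, elect_k)
            if lot_k:
                group = by_lot(group, lot_k)
        return elect(group, council, 1)[0]

    print(doge_election(["noble_%d" % i for i in range(500)]))

The repeated lotteries are a large part of the corruption resistance the paper describes: a faction can’t know in advance which of its members will survive to the next electing stage, so there is no small, predictable set of electors worth bribing.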

Posted on July 27, 2007 at 12:08 PM • 11 Comments

Computer Repair Technicians Accused of Copying Customer Files

We all know that it’s possible, but we assume the people who repair our computers don’t do this:

In recent months, allegations of agents copying pornography, music and alluring photos from customers’ computers have circulated on the Internet. Some bloggers now call it the “Peek Squad.”

“Any attractive young woman who drops off her computer with the Geek Squad should assume that her photos will be looked at,” said Brett Haddock, a former Geek Squad technician.

Just how much are these people paid? And how much money can you make with a few good identity thefts?

Posted on July 26, 2007 at 3:00 PM

Avian Flu and Disaster Planning

If an avian flu pandemic broke out tomorrow, would your company be ready for it?

Computerworld published a series of articles on that question last year, prompted by a presentation that analyst firm Gartner gave at a conference last November. Among Gartner’s recommendations: “Store 42 gallons of water per data center employee—enough for a six-week quarantine—and don’t forget about food, medical care, cooking facilities, sanitation and electricity.”

And Gartner’s conclusion, over half a year later: Pretty much no organizations are ready.

This doesn’t surprise me at all. It’s not that organizations don’t spend enough effort on disaster planning, although that’s true; it’s that this really isn’t the sort of disaster worth planning for.

Disaster planning is critically important for individuals, families, organizations large and small, and governments. For the individual, it can be as simple as spending a few minutes thinking about how he or she would respond to a disaster. For example, I’ve spent a lot of time thinking about what I would do if I lost the use of my computer, whether by equipment failure, theft or government seizure. As a result, I have a pretty complex backup and encryption system, ensuring that 1) I’d still have access to my data, and 2) no one else would. On the other hand, I haven’t given any serious thought to family disaster planning, although others have.

For an organization, disaster planning can be much more complex. What would it do in the case of fire, flood, earthquake, and so on? How would its business survive? The resultant disaster plan might include backup data centers, temporary staffing contracts, planned degradation of services, and a host of other products and services—and consultants to tell you how to use it all.

And anyone who does this kind of thing knows that planning isn’t enough: Testing your disaster plan is critical. Far too often the backup software fails when it has to do an actual restore, or the diesel-powered emergency generator fails to kick in. That’s also the flaw with the emergency kit suggestions I linked to above; if you don’t know how to use a compass or first-aid kit, having one in your car won’t do you much good.

But testing isn’t just valuable because it reveals practical problems with a plan. It also has enormous ancillary benefits for your organization in terms of communication and team building. There’s nothing like a good crisis to get people to rely on each other. Sometimes I think companies should forget about those team-building exercises that involve climbing trees and building fires, and instead pretend that a flood has taken out the primary data center.

It really doesn’t matter what disaster scenario you’re testing. The real disaster won’t be like the test, regardless of what you do, so just pick one and go. Whether you’re an individual trying to recover from a simulated virus attack, or an organization testing its response to a hypothetical shooter in the building, you’ll learn a lot about yourselves and your organization, as well as your plan.

There is a sweet spot, though, in disaster preparedness. Some disasters are too small or too common to worry about. (“We’re out of paper clips!? Call the Crisis Response Team together. I’ll get the Paper Clip Shortage Readiness Program Directive Manual Plan.”) And others are too large or too rare.

It makes no sense to plan for total annihilation of the continent, whether by nuclear or meteor strike: that’s obvious. But depending on the size of the planner, many other disasters are also too large to plan for. People can stockpile food and water to prepare for a hurricane that knocks out services for a few days, but not for a Katrina-like flood that knocks out services for months. Organizations can prepare for losing a data center due to a flood, fire, or hurricane, but not for a Black-Death-scale epidemic that would wipe out a third of the population. No one can fault bond trading firm Cantor Fitzgerald, which lost two thirds of its employees in the 9/11 attack on the World Trade Center, for not having a plan in place to deal with that possibility.

Another consideration is scope. If your corporate headquarters burns down, it’s actually a bigger problem for you than a citywide disaster that does much more damage. If the whole San Francisco Bay Area were taken out by an earthquake, customers of affected companies would be far more likely to forgive lapses in service, or would go the extra mile to help out. Think of the nationwide response to 9/11; the human “just deal with it” social structures kicked in, and we all muddled through.

In general, you can only reasonably prepare for disasters that leave your world largely intact. If a third of the country’s population dies, it’s a different world. The economy is different, the laws are different—the world is different. You simply can’t plan for it; there’s no way you can know enough about what the new world will look like. Disaster planning only makes sense within the context of existing society.

What all of this means is that any bird flu pandemic will very likely fall outside the corporate disaster-planning sweet spot. We’re just guessing on its infectiousness, of course, but (despite the alarmism from two and three years ago), likely scenarios are either moderate to severe absenteeism because people are staying home for a few weeks—any organization ought to be able to deal with that—or a major disaster of proportions that dwarf the concerns of any organization. There’s not much in between.

Honestly, if you think you’re heading toward a world where you need to stash six weeks’ worth of food and water in your company’s closets, do you really believe that it will be enough to see you through to the other side?

A blogger commented on what I said in one article:

Schneier is using what I would call the nuclear war argument for doing nothing. If there’s a nuclear war nothing will be left anyway, so why waste your time stockpiling food or building fallout shelters? It’s entirely out of your control. It’s someone else’s responsibility. Don’t worry about it.

Almost. Bird flu, pandemics, and disasters in general—whether man-made like 9/11, natural like bird flu, or a combination like Katrina—are definitely things we should worry about. The proper place for bird flu planning is at the government level. (These are also the people who should worry about nuclear and meteor strikes.) But real disasters don’t exactly match our plans, and we are best served by a bunch of generic disaster plans and a smart, flexible organization that can deal with anything.

The key is preparedness. Much more important than planning, preparedness is about setting up social structures so that people fall into doing something sensible when things go wrong. Think of all the wasted effort—and even more wasted desire—to do something after Katrina because there was no way for most people to help. Preparedness is about getting people to react when there’s a crisis. It’s something the military trains its soldiers for.

This advice holds true for organizations, families, and individuals as well. And remember, despite what you read about nuclear accidents, suicide terrorism, genetically engineered viruses and mutant man-eating badgers, you live in the safest society in the history of mankind.

This essay originally appeared in Wired.com.

EDITED TO ADD (8/1): A good rebuttal.

Posted on July 26, 2007 at 7:14 AM • 60 Comments

TSA Warns of Terrorist Dry Runs

A leaked TSA memo warns screeners to be on the lookout for terrorists staging dry runs through airport security. (The TSA issued a short statement following the leak, and here’s an AP story on the memo.)

Honestly, the four incidents described, with photos, sure sound suspicious to me:

  • (U//FOUO) San Diego, July 7. A U.S. person—either a citizen or a foreigner legally here—checked baggage containing two ice packs covered in duct tape. The ice packs had clay inside them rather than the normal blue gel.
  • (U//FOUO) Milwaukee, June 4. A U.S. person’s carryon baggage contained wire coil wrapped around a possible initiator, an electrical switch, batteries, three tubes and two blocks of cheese. The bulletin said block cheese has a consistency similar to some explosives.
  • (U//FOUO) Houston, Nov. 8, 2006. A U.S. person’s checked baggage contained a plastic bag with a 9-volt battery, wires, a block of brown clay-like minerals and pipes.
  • (U//FOUO) Baltimore, Sept. 16, 2006. A couple’s checked baggage contained a plastic bag with a block of processed cheese taped to another plastic bag holding a cellular phone charger.

The cheese and clay are stand-ins for plastic explosive. And honestly, I don’t care if someone is carrying a water bottle, wearing a head scarf, or buying a one-way ticket, but if someone has a block of cheese with wires and a detonator—I want the FBI to be called in.

Note that profiling didn’t seem to help here. Three of the incidents involved U.S. persons, and one is unspecified. Also, according to the report:

Individuals involved in these incidents were of varying gender, and initial investigations do not link them with criminal or terrorist organizations. However, most passengers’ explanations for carrying the suspicious items were questionable, and some investigations are still ongoing.

I wish I had more information on what the “questionable” explanations were.

Flagging suspicious items is what the TSA is supposed to do. Unfortunately, “suspicious” is a subjective term, and problems arise when screeners aren’t competent enough to distinguish between the potentially dangerous and the just plain strange. If bulletins like these are accompanied by real training, then we’re getting some actual security out of the TSA.

EDITED TO ADD (7/25): I was quoted in the AP story.

EDITED TO ADD (7/26): At least one of the incidents seems to be bogus.

EDITED TO ADD (7/28): Seems like all four incidents might be bogus:

“That bulletin for law enforcement eyes only told of suspicious items recently found in passenger’s bags at airport checkpoints, warned that they may signify dry runs for terrorist attacks,” CNN’s Brian Todd reported Friday afternoon. “Well it turns out none of that is true.”

[…]

“The FBI now says there were valid explanations for all four incidents in that bulletin, and a US government official says no charges will be brought in any of these cases,” Todd reported.

I’m skeptical. I can’t think of a valid explanation for “wire coil wrapped around a possible initiator, an electrical switch, batteries, three tubes and two blocks of cheese.” I’d like to know what it was.

Posted on July 25, 2007 at 1:55 PM • 84 Comments

MRI Lie Detectors

Long and interesting article on fMRI lie detectors.

I was particularly struck by this paragraph, about why people are bad at detecting lies:

Maureen O’Sullivan, a deception researcher at the University of San Francisco, studies why humans are so bad at recognizing lies. Many people, she says, base assessments of truthfulness on irrelevant factors, such as personality or appearance. “Baby-faced, non-weird, and extroverted people are more likely to be judged truthful,” she says. (Maybe this explains my trust in Steve Glass.) People are also blinkered by the “truthfulness bias”: the vast majority of questions we ask of other people—the time, the price of the breakfast special—are answered honestly, and truth is therefore our default expectation. Then, there’s the “learning-curve problem.” We don’t have a refined idea of what a successful lie looks and sounds like, since we almost never receive feedback on the fibs that we’ve been told; the co-worker who, at the corporate retreat, assured you that she loved your presentation doesn’t usually reveal later that she hated it. As O’Sullivan puts it, “By definition, the most convincing lies go undetected.”

EDITED TO ADD (8/28): The New York Times has an article on the topic.

Posted on July 25, 2007 at 6:26 AM • 29 Comments

Truth and Photographs

A really interesting essay on truth and photographs:

In discussing truth and photography, we are asking whether a caption or a belief—whether a statement about a photograph—is true or false about (the things depicted in) the photograph. A caption is like a statement. It trumpets the claim, “This is the Lusitania.” And when we wonder “Is this a photograph of the Lusitania?” we are wondering whether the claim is true or false. The issue of the truth or falsity of a photograph is only meaningful with respect to statements about the photograph. Truth or falsity “adheres” not to the photograph itself but to the statements we make about a photograph. Depending on the statements, our answers change. All alone—shorn of context, without captions—a photograph is neither true nor false.

Posted on July 24, 2007 at 1:52 PM • 50 Comments

Terrorist Watch List: 20,000 False Alarms

Why does anyone think this makes security sense?

The Justice Department’s proposed budget for 2008 reveals for the first time how often names match against the database, reporting that there were 19,967 “positive matches” in 2006. The TSC had expected far fewer—14,780. The watch list matched people 5,396 times in 2004 and 15,730 times in 2005.

The report defines a positive match as “one in which an encountered individual is positively matched with an identity in the Terrorist Screening Data Base, or TSDB.”

It’s not clear from the report whether those numbers include individuals whose names only coincidentally match one of those on the list, such as when Sen. Ted Kennedy was confused with a former IRA terrorist also named Kennedy.

The watch list has been hounded by these mismatches, which have included small children, former presidential candidates, and Americans with common names such as David Nelson.

How do I know they’re all false alarms? Because this administration makes a press splash with every arrest, no matter how scant the evidence is. Do you really think they would pass up a chance to tout how good the watch list is?

EDITED TO ADD (8/28): The Washington Post just got around to writing an article on the topic, and Dan Solove has some good commentary.

Posted on July 23, 2007 at 1:39 PM • 48 Comments

Ransomware

Computer security people have been talking about this for years, but only recently are we seeing it in the wild: software that encrypts your data, and then charges you for the decryption key.

PandaLabs points out that this is not the first time such a Trojan has made the rounds, citing PGPCoder as having a “long record on the ransomware scene.” Ransom.A is another Trojan that presented the user with both a shorter time frame and a significantly lower bounty—a file was to be deleted every 30 minutes unless the user paid the $10.99 ransom. Finally, Arhiveus.A also encrypted user files, but instead of demanding money, it demanded that the user purchase products from an online drug store.

There appears to be no information available regarding what happens when the user attempts to contact the address in the e-mail or whether the alleged decrypting software actually does the job it’s supposed to do. Gostev places a strong warning on his blog, however, saying that if you find yourself infected with Sinowal.FY, Gpcode.ai, or any other type of ransomware, do not pay up “under any circumstances.” It also doesn’t appear as if there is currently any antivirus solution that can help decrypt the files once they are encrypted, although Gostev says that the Kaspersky Lab team is currently working on a decryption routine.
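One reason a decryption routine may simply never materialize is textbook hybrid cryptography: if the Trojan generates a fresh symmetric key per victim, encrypts the data with it, and leaves behind only a copy of that key encrypted to the attacker’s public key, there is nothing recoverable on the machine. Here’s a minimal Python sketch of the generic construction—not Gpcode’s actual code; early variants reportedly made implementation mistakes, which is what gives antivirus vendors an opening at all:

    # pip install cryptography -- generic hybrid encryption, illustration only
    from cryptography.fernet import Fernet
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding

    # Attacker keypair; the private half never touches the victim's machine.
    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    public_key = private_key.public_key()

    data = b"victim's document"

    # A per-victim symmetric key encrypts the data...
    sym_key = Fernet.generate_key()
    ciphertext = Fernet(sym_key).encrypt(data)

    # ...and only a public-key-wrapped copy of that key is left behind.
    oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)
    wrapped_key = public_key.encrypt(sym_key, oaep)

    # Recovery requires the attacker's private key -- or good backups.
    recovered = Fernet(private_key.decrypt(wrapped_key, oaep)).decrypt(ciphertext)
    assert recovered == data

Done correctly, the victim’s only options are the attacker’s key, an implementation flaw, or restoring from backup—which is why “don’t pay under any circumstances” is only practical advice when paired with a backup regimen.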

Posted on July 23, 2007 at 6:08 AM • 33 Comments

More Forged Credentials

I’ve written about forged credentials before, and how hard a problem it is to solve. Here’s another story illustrating the problem:

In an apparent violation of the law, a controversial aide to ex-Gov. Mitt Romney created phony law enforcement badges that he and other staffers used on the campaign trail to strong-arm reporters, avoid paying tolls and trick security guards into giving them immediate access to campaign venues, sources told the Herald.

When faced with a badge, most people assume it’s legitimate. And even if they wanted to verify the badge, there’s no real way for them to do so.

Posted on July 20, 2007 at 1:37 PM • 28 Comments

Federal Agents Using Spyware

U.S. drug enforcement agents use key loggers to bypass both PGP and Hushmail encryption:

An agent with the Drug Enforcement Administration persuaded a federal judge to authorize him to sneak into an Escondido, Calif., office believed to be a front for manufacturing the drug MDMA, or Ecstasy. The DEA received permission to copy the hard drives’ contents and inject a keystroke logger into the computers.

That was necessary, according to DEA Agent Greg Coffey, because the suspects were using PGP and the encrypted Web e-mail service Hushmail.com. Coffey asserted that the DEA needed “real-time and meaningful access” to “monitor the keystrokes” for PGP and Hushmail passphrases.

And the FBI used spyware to monitor someone suspected of making bomb threats:

In an affidavit seeking a search warrant to use the software, filed last month in U.S. District Court in the Western District of Washington, FBI agent Norman Sanders describes the software as a “computer and internet protocol address verifier,” or CIPAV.

The full capabilities of the FBI’s “computer and internet protocol address verifier” are closely guarded secrets, but here’s some of the data the malware collects from a computer immediately after infiltrating it, according to a bureau affidavit acquired by Wired News.

  • IP address
  • MAC address of ethernet cards
  • A list of open TCP and UDP ports
  • A list of running programs
  • The operating system type, version and serial number
  • The default internet browser and version
  • The registered user of the operating system, and registered company name, if any
  • The current logged-in user name
  • The last visited URL

Once that data is gathered, the CIPAV begins secretly monitoring the computer’s internet use, logging every IP address to which the machine connects.

All that information is sent over the internet to an FBI computer in Virginia, likely located at the FBI’s technical laboratory in Quantico.

Sanders wrote that the spyware program gathers a wide range of information, including the computer’s IP address; MAC address; open ports; a list of running programs; the operating system type, version and serial number; preferred internet browser and version; the computer’s registered owner and registered company name; the current logged-in user name and the last-visited URL.

The CIPAV then settles into a silent “pen register” mode, in which it lurks on the target computer and monitors its internet use, logging the IP address of every computer to which the machine connects for up to 60 days.
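Note that nothing on that list requires exotic techniques once code is already running on the target; the covert delivery and the legal process are the interesting parts. As a rough illustration—ordinary local introspection in Python using the third-party psutil library, my stand-in and obviously not the FBI’s code—most of those fields are a few library calls:

    # pip install psutil -- benign local introspection, not covert software
    import platform
    import socket
    import uuid
    import psutil

    def host_profile():
        """Collect a few CIPAV-style fields via ordinary system APIs."""
        users = psutil.users()
        return {
            "hostname": socket.gethostname(),
            "mac_address": "%012x" % uuid.getnode(),  # primary interface
            "os": "%s %s" % (platform.system(), platform.release()),
            "current_user": users[0].name if users else "?",
            # Listening TCP ports (may require elevated privileges on some OSes)
            "open_ports": sorted({c.laddr.port for c in psutil.net_connections()
                                  if c.laddr and c.status == psutil.CONN_LISTEN}),
            "running_programs": sorted({p.info["name"] or "?"
                                        for p in psutil.process_iter(["name"])}),
        }

    if __name__ == "__main__":
        for field, value in host_profile().items():
            print(field, ":", value)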

Another article.

I’ve been saying this for a while: the easiest way to get at someone’s communications is not to intercept them in transit, but to access them on the sender’s or recipient’s computer.

EDITED TO ADD (7/20): I should add that the police got a warrant in both cases. This is not a story about abuse of police power or surveillance without a warrant. This is a story about how the police conduct electronic surveillance, and how they bypass security technologies.

Posted on July 20, 2007 at 6:52 AM • 44 Comments

Enigma Machine for Sale on eBay

A World War II German Enigma machine (three-rotor version) is for sale on eBay right now. At this writing, there have been about 60 bids, and the current price is $20K. This is below the reserve price, which means that the machine won’t sell until it reaches that (secret) price.

It’s expensive, but probably worth it. The Enigma looks like it’s in perfect condition—the seller claims “full working condition with extra lamps”—and includes the manual. All five rotors are included: three in the machine and the other two in a box. The three-rotor version is the most common, but it’s still very rare.

Of course I’d like it for myself—I have a three-rotor Enigma, but it’s missing all its rotors and some of its lamps—but not at that price.

And we can’t see who’s bidding, either. Recently eBay made a change in how it displays auction bids: it hides bidder identities when the auction price gets high. This is to combat “second chance fraud,” where a fraudster contacts a buyer who lost an auction and offers him the same article at the slightly lower losing price, then disappears after receiving payment.

The auction closes in eight days. Good luck.

EDITED TO ADD (7/19): The listing has been pulled; eBay doesn’t say why. The price was $25K after 64 bids when I last saw it; the price was still below the reserve.

EDITED TO ADD (7/20): It’s been relisted. The seller says that the other auction was taken down because of a “problem with pictures” (odd, because the new pictures don’t seem different), and that the reserve price of $28K was met. You can “buy it now” for $50K, or make your best offer. I’m really curious what the final price for this will be—I don’t think it’s worth anywhere near $50K.

EDITED TO ADD (7/20): Sold for $30K. I don’t know why the seller decided to use this alternate eBay system, instead of relisting it as an auction. My guess is that he could have gotten more than $30K if he let the auction run its course over the week.

Posted on July 19, 2007 at 4:45 PM • 23 Comments

Buildings You Can't Photograph

Very Kafkaesque:

The bottom line is that McCammon was caught in a classic logical trap. If he had only known the building was off-limits to photographers, he would have avoided it. But he was not allowed to know that fact. “Reasonable, law-abiding people tend to avoid these types of things when it can be helped,” McCammon wrote. “Thus, my request for a list of locations within Arlington County that are unmarked, but at which photography is either prohibited or discouraged according to some (public or private) policy. Of course, such a list does not exist. Catch-22.”

The only antidote to this security mania is sunshine. Only when more and more Americans do as McCammon has done and take the time and effort to chronicle these excesses and insist on answers from authorities will we stand a chance of restoring balance and sanity to the blend of liberty and security that we are madly remixing in these confused times.

Here’s the relevant map. It’s the building on the NW/upper-left side of the intersection.

Posted on July 19, 2007 at 2:25 PM • 36 Comments

The TSA and the Case of the Strange Battery Charger

A TSA screener doesn’t like the look of a homemade battery charger, and refuses to let it on an airplane. Interesting story, both for the escalation procedure the TSA screener followed, and this final observation:

But these are the times we live in. A handful of people with no knowledge of physics, engineering, or pyrotechnics are responsible for determining what is and what is not safe to bring on a plane. They’re paid minimum wage and told to panic if they see something they don’t recognize. Does this make me feel safer? It doesn’t really matter. Implementing real security would bring the cost of flying up, which would likely cause a collapse of the airborne transportation network this country has worked so hard to build up.

The UK banned laptop computers in carry-on luggage for a few days and quickly reversed the idea. The lack of laptops made the option unattractive to business professionals; the security would have cost more than money, and many passengers wouldn’t have accepted it.

So the TSA finally let me onto my flight with the two devices they told me they weren’t going to let me take on my flight. They told me the device looked like an I.E.D., then let me on the plane with it.

Does that mean I can bring them on my flight next week?

And that’s the problem: the TSA is both arbitrary and capricious, and it’s impossible to follow the rules because no one knows how they will be applied.

Posted on July 19, 2007 at 6:53 AM • 54 Comments

Function Creep in London Congestion-Charge Cameras

In London (the system was built for road-fare collection, and is now being used for counterterrorism):

Police are to be given live access to London’s congestion charge cameras—allowing them to track all vehicles entering and leaving the zone.

Anti-terror officers will be exempted from parts of the Data Protection Act to allow them to see the date, time and location of vehicles in real time.

They previously had to apply for access on a case-by-case basis.

I’ll bet you anything that, soon after this data is used for antiterrorism purposes, more exceptions will be put in place for more routine police matters.

EDITED TO ADD (8/16): Well, that didn’t take long.

Posted on July 18, 2007 at 11:40 AM • 25 Comments

New Harry Potter Book Leaked on BitTorrent

It’s online: digital photographs of every page are available on BitTorrent.

I’ve been fielding press calls on this, mostly from reporters asking me what the publisher could have done differently. Honestly, I don’t think it was possible to keep the book under wraps. There are millions of copies of the book headed to all four corners of the globe. There are simply too many people who must be trusted in order for the security to hold. And all it takes is one untrustworthy person—one truck driver, one bookstore owner, one warehouse worker—to leak the book.

But conversely, I don’t think the publishers should care. Anyone fan-crazed enough to read digital photographs of the pages a few days before the real copy comes out is also someone who is going to buy a real copy. And anyone who will read the digital photographs instead of the real book would have borrowed a copy from a friend. My guess is that the publishers will lose zero sales, and that the pre-release will simply increase the press frenzy.

I’m kind of amazed the book hadn’t leaked sooner.

And, of course, it is inevitable that we’ll get ASCII copies of the book post-publication, for all of you who want to read it on your PDA.

EDITED TO ADD (7/18): I was interviewed for “Future Tense” on this story.

EDITED TO ADD (7/20): This article outlines some of the security measures the publisher took with the manuscript.

EDITED TO ADD (7/25): The camera used has a unique serial number embedded in each of the digital photos, which might be used to track down the photographer. Just another example of how we leave electronic footprints everywhere we go.
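For the curious, reading that kind of metadata is trivial. Here’s a best-effort sketch using the Pillow library; note that the standard BodySerialNumber EXIF tag is a relatively recent addition, and many cameras instead bury the serial in the proprietary MakerNote, so a real investigation would need per-vendor parsing. The filename is made up:

    # pip install Pillow -- best-effort read of a camera serial from EXIF
    from PIL import Image

    EXIF_IFD = 0x8769     # pointer to the EXIF sub-directory
    BODY_SERIAL = 0xA431  # standard "BodySerialNumber" tag

    def camera_serial(path):
        exif = Image.open(path).getexif()
        sub_ifd = exif.get_ifd(EXIF_IFD)     # empty mapping if absent
        return sub_ifd.get(BODY_SERIAL)      # None if the tag is missing

    print(camera_serial("page_001.jpg"))     # hypothetical filename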

EDITED TO ADD (8/15): Here is a much more comprehensive analysis of who the leaker is:

  • The photographer is Caucasian.
  • The photographer is probably not married (no wedding ring on left hand).
  • The photographer is likely male. In the first few photos, the ring finger appears to be longer than the index finger. This is called the 2D:4D ratio, and a lower ratio is symptomatic of a high level of testosterone, suggesting a male. However, there is no clear shot of the fingers laid out, so this is not conclusive.
  • Although cameras are usually designed for right-handed use, the photographer uses his left hand to pin down the book. This suggests that the photographer is right handed. (I’ve seen southpaws try to do this sort of thing, and they usually hold the camera in an odd way with their left hand.) However, this too is not conclusive.
  • The photographer’s hand looks young—possibly a teenager or young adult.

Much, much more in the link.

Posted on July 17, 2007 at 4:38 PM • 62 Comments

Canadians Are Allowed to Say "Bomb" in Airports

Some sense from Canada:

The Canadian Air Transport Security Authority, trying to clamp down on screeners who alert police every time they hear alarming words, has issued a bulletin urging staff to show more discretion.

A person who announces “You better look through my suitcase carefully, because there’s a bomb in there”, “I am going to set fire to this airplane with this blowtorch” or “The man in seat 32F has a machine gun” will still be arrested.

But someone who remarks “Your hockey team is going to get bombed (badly beaten) tonight”, “Hi Jack!” or “You don’t need to frisk me, I’m not carrying a weapon” will first be warned about their behavior.

Posted on July 17, 2007 at 6:42 AM • 44 Comments

Security ROI

Interesting essay on security and return on investment (ROI):

Let’s get back to ROI. The major problem the ROSI crowd has is they are trying to speak the language of their managers who select projects based on ROI. There is no problem with selecting projects based on ROI, if the project is a wealth creation project and not a wealth preservation project.

Security managers should be unafraid to avoid using the term ROI, and instead say “My project will cost $1,000 but save the company $10,000.” Saving money / wealth preservation / loss avoidance is good.
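For concreteness, here is the arithmetic behind that example, using the commonly cited return-on-security-investment formula (my framing, not the essay’s):

    \[
    \mathrm{ROSI} = \frac{\text{expected loss avoided} - \text{cost of control}}{\text{cost of control}}
                  = \frac{\$10{,}000 - \$1{,}000}{\$1{,}000} = 9 \quad (\text{a }900\%\text{ “return”})
    \]

The essay’s point stands either way: the numerator measures preserved wealth, not created wealth, so calling it “ROI” invites the wrong comparison with revenue-generating projects.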

Posted on July 14, 2007 at 6:54 AM • 3 Comments

Friday Squid Blogging: Rare Squid Washes up in Tasmania

A (dead) giant squid of the species Architeuthis washed up on a Tasmanian beach.

The hood of the squid is about two metres long and the body a couple of metres long.

A TPWS spokesman said the tentacles have been badly mangled so their length could not be measured.

Strahan senior ranger Chris Arthur said it is the first time that a giant squid has washed up on the beaches of the west coast, although the giant squid is known to be a food source for sperm whales, which have frequently stranded on the coast.

This article has different sizing:

A squid as long as a bus has washed up on a beach on the west coast of Tasmania.

Measuring eight metres from the tip of its body to the end of its tentacles, the squid weighs about 250 kilograms.

Good picture here; another one here. Another article here. And yet another article and video.

Posted on July 13, 2007 at 4:25 PM • 11 Comments

Privacy and the "Nothing to Hide" Argument

Good essay:

In this short essay, written for a symposium in the San Diego Law Review, Professor Daniel Solove examines the “nothing to hide” argument. When asked about government surveillance and data mining, many people respond by declaring: “I’ve got nothing to hide.” According to the “nothing to hide” argument, there is no threat to privacy unless the government uncovers unlawful activity, in which case a person has no legitimate justification to claim that it remain private. The “nothing to hide” argument and its variants are quite prevalent, and thus are worth addressing. In this essay, Solove critiques the “nothing to hide” argument and exposes its faulty underpinnings.

Posted on July 13, 2007 at 7:11 AM • 65 Comments

Correspondent Inference Theory

Two people are sitting in a room together: an experimenter and a subject. The experimenter gets up and closes the door, and the room becomes quieter. The subject is likely to believe that the experimenter’s purpose in closing the door was to make the room quieter.

This is an example of correspondent inference theory. People tend to infer the motives—and also the disposition—of someone who performs an action based on the effects of his actions, and not on external or situational factors. If you see someone violently hitting someone else, you assume it’s because he wanted to—and is a violent person—and not because he’s play-acting. If you read about someone getting into a car accident, you assume it’s because he’s a bad driver and not because he was simply unlucky. And—more importantly for this column—if you read about a terrorist, you assume that terrorism is his ultimate goal.

It’s not always this easy, of course. If someone chooses to move to Seattle instead of New York, is it because of the climate, the culture or his career? Edward Jones and Keith Davis, who advanced this theory in the 1960s and 1970s, proposed a theory of “correspondence” to describe the extent to which this effect predominates. When an action has a high correspondence, people tend to infer the motives of the person directly from the action: e.g., hitting someone violently. When the action has a low correspondence, people tend not to make the assumption: e.g., moving to Seattle.

Like most cognitive biases, correspondent inference theory makes evolutionary sense. In a world of simple actions and base motivations, it’s a good rule of thumb that allows a creature to rapidly infer the motivations of another creature. (He’s attacking me because he wants to kill me.) Even in sentient and social creatures like humans, it makes a lot of sense most of the time. If you see someone violently hitting someone else, it’s reasonable to assume that he’s a violent person. Cognitive biases aren’t bad; they’re sensible rules of thumb.

But like all cognitive biases, correspondent inference theory fails sometimes. And one place it fails pretty spectacularly is in our response to terrorism. Because terrorism often results in the horrific deaths of innocents, we mistakenly infer that those deaths are the terrorist’s primary motivation, and not a means to a different end.

I found this interesting analysis in a paper by Max Abrahms in International Security. “Why Terrorism Does Not Work” (.PDF) analyzes the political motivations of 28 terrorist groups: the complete list of “foreign terrorist organizations” designated by the U.S. Department of State since 2001. He lists 42 policy objectives of those groups, and finds that the groups achieved them only 7 percent of the time.

According to the data, terrorism is more likely to work if 1) the terrorists attack military targets more often than civilian ones, and 2) they have minimalist goals like evicting a foreign power from their country or winning control of a piece of territory, rather than maximalist objectives like establishing a new political system in the country or annihilating another nation. But even so, terrorism is a pretty ineffective means of influencing policy.

There’s a lot to quibble about in Abrahms’ methodology, but he seems to be erring on the side of crediting terrorist groups with success. (Hezbollah’s objective of expelling both peacekeepers and Israel from Lebanon counts as a success, but so does the “limited success” of the Tamil Tigers in establishing a Tamil state.) Still, he provides good data to support what was until recently common knowledge: Terrorism doesn’t work.

This is all interesting stuff, and I recommend that you read the paper for yourself. But to me, the most insightful part is when Abrahms uses correspondent inference theory to explain why terrorist groups that primarily attack civilians do not achieve their policy goals, even if they are minimalist. Abrahms writes:

The theory posited here is that terrorist groups that target civilians are unable to coerce policy change because terrorism has an extremely high correspondence. Countries believe that their civilian populations are attacked not because the terrorist group is protesting unfavorable external conditions such as territorial occupation or poverty. Rather, target countries infer from the short-term consequences of terrorism—the deaths of innocent civilians, mass fear, loss of confidence in the government to offer protection, economic contraction, and the inevitable erosion of civil liberties—the objectives of the terrorist group. In short, target countries view the negative consequences of terrorist attacks on their societies and political systems as evidence that the terrorists want them destroyed. Target countries are understandably skeptical that making concessions will placate terrorist groups believed to be motivated by these maximalist objectives.

In other words, terrorism doesn’t work, because it makes people less likely to acquiesce to the terrorists’ demands, no matter how limited they might be. The reaction to terrorism has an effect completely opposite to what the terrorists want; people simply don’t believe those limited demands are the actual demands.

This theory explains, with a clarity I have never seen before, why so many people make the bizarre claim that al Qaeda terrorism—or Islamic terrorism in general—is “different”: that while other terrorist groups might have policy objectives, al Qaeda’s primary motivation is to kill us all. This is something we have heard from President Bush again and again—Abrahms has a page of examples in the paper—and is a rhetorical staple in the debate. (You can see a lot of it in the comments to this previous essay.)

In fact, Bin Laden’s policy objectives have been surprisingly consistent. Abrahms lists four; here are six from former CIA analyst Michael Scheuer’s book Imperial Hubris:

  1. End U.S. support of Israel
  2. Force American troops out of the Middle East, particularly Saudi Arabia
  3. End the U.S. occupation of Afghanistan and (subsequently) Iraq
  4. End U.S. support of other countries’ anti-Muslim policies
  5. End U.S. pressure on Arab oil companies to keep prices low
  6. End U.S. support for “illegitimate” (i.e. moderate) Muslim governments, like Pakistan

Although Bin Laden has complained that Americans have completely misunderstood the reason behind the 9/11 attacks, correspondent inference theory postulates that he’s not going to convince people. Terrorism, and 9/11 in particular, has such a high correspondence that people use the effects of the attacks to infer the terrorists’ motives. In other words, since Bin Laden caused the death of a couple of thousand people in the 9/11 attacks, people assume that must have been his actual goal, and he’s just giving lip service to what he claims are his goals. Even Bin Laden’s actual objectives are ignored as people focus on the deaths, the destruction and the economic impact.

Perversely, Bush’s misinterpretation of terrorists’ motives actually helps prevent them from achieving their goals.

None of this is meant to either excuse or justify terrorism. In fact, it does the exact opposite, by demonstrating why terrorism doesn’t work as a tool of persuasion and policy change. But we’re more effective at fighting terrorism if we understand that it is a means to an end and not an end in itself; that requires us to understand the true motivations of the terrorists and not just their particular tactics. And the more our own cognitive biases cloud that understanding, the more we mischaracterize the threat and make bad security trade-offs.

This is my 46th essay for Wired.com, based on a paper I blogged about last week (there are a lot of good comments to that blog post).

Posted on July 12, 2007 at 12:59 PM62 Comments

Police Don’t Overreact to Strange Object

It’s nice to post a positive story once in a while:

Is it a bird? Is it a bomb? No, it’s the missing ‘bot.

A robot dubbed Seahorse 1, which was stolen days before an international contest, has turned up in a field off Interstate 45 in Dallas.

“Somebody was mowing his grandmother’s yard and thought it was a bomb,” said Nathan Huntoon, an engineering grad student and member of SMU’s robotics team.

The police were delivering the missing machine to SMU Monday afternoon. “We don’t know yet if it’s in working condition,” Mr. Huntoon said.

Sad that this feels like an exception.

Posted on July 11, 2007 at 6:20 AM25 Comments

Story of the Greek Wiretapping Scandal

I’ve blogged a few times about the Greek wiretapping scandal. A system to allow the police to eavesdrop on conversations was abused (surprise, surprise).

Anyway, there’s a really good technical analysis in IEEE Spectrum this month.

On 9 March 2005, a 38-year-old Greek electrical engineer named Costas Tsalikidis was found hanged in his Athens loft apartment, an apparent suicide. It would prove to be merely the first public news of a scandal that would roil Greece for months.

The next day, the prime minister of Greece was told that his cellphone was being bugged, as were those of the mayor of Athens and at least 100 other high-ranking dignitaries, including an employee of the U.S. embassy. [See sidebar “CEOs, MPs, & a PM.”]

The victims were customers of Athens-based Vodafone-Panafon, generally known as Vodafone Greece, the country’s largest cellular service provider; Tsalikidis was in charge of network planning at the company. A connection seemed obvious. Given the list of people and their positions at the time of the tapping, we can only imagine the sensitive political and diplomatic discussions, high-stakes business deals, or even marital indiscretions that may have been routinely overheard and, quite possibly, recorded.

[…]

A study of the Athens affair, surely the most bizarre and embarrassing scandal ever to engulf a major cellphone service provider, sheds considerable light on the measures networks can and should take to reduce their vulnerability to hackers and moles.

It’s also a rare opportunity to get a glimpse of one of the most elusive of cybercrimes. Major network penetrations of any kind are exceedingly uncommon. They are hard to pull off, and equally hard to investigate.

See also blog entries by Matt Blaze, Steve Bellovin, and John Markoff; they make some good security points.

EDITED TO ADD (10/22): More info:

The head of Vodafone Greece told the Government that as soon as it discovered the tapping software, it removed it and notified the authorities. However, the shutdown of the equipment prompted strong criticism of Vodafone because it had prevented the authorities from tracing the taps.

Posted on July 10, 2007 at 12:34 PM16 Comments

Improvised Weapons Out of Newspaper

The Millwall brick:

In the late 1960s—in response to violence at football matches in England—police began confiscating any objects that could be used as weapons. These items included steel combs, pens, beermats, polo mints, shoelaces and even boots.

But not liquids, apparently.

However, fans were still permitted to bring in newspapers. Larger newspapers such as The Guardian or The Financial Times work best for a Millwall brick, and the police looked with suspicion at working class football fans who carried such newspapers. Because of their more innocent appearance, tabloid newspapers became the newspapers of choice for Millwall bricks.

Instructions on how to make one in the link.

When will the TSA start banning newspapers?

Posted on July 9, 2007 at 6:36 AM39 Comments

School Uniforms to Enhance Security?

Look at the last line of this article, about an Ohio town considering mandatory school uniforms in lower grades:

For Edgewood, the primary motivation for adopting uniforms would be to enhance school security, York said.

What is he talking about? Does he think that school uniforms enhance security because it would be easier to spot non-uniform-wearing non-students in the school building and on the grounds? (Of course, non-students with uniforms would have an easier time sneaking in.) Or something else?

Or is security just an excuse for any random thing these days?

Posted on July 5, 2007 at 6:30 AM76 Comments

Airport Security: Israel vs. the United States

A comparison:

We were subjected to a 15-minute interrogation at the airport in Eilat, in southern Israel, after spending the weekend in neighboring Jordan. The young, bespectacled security official was robotic and driven in his questioning. He asked to see a copy of my husband’s invitation to his conference. The full names of anyone we knew in Israel. More and more questions, raising suspicions that started to make me feel guilty.

“Did you give anyone your e-mail or phone number? Did anyone want to stay in contact with you?” He had us pegged for naive travelers who could become the tool of terrorists.

He even went through our digital photos, stopping at a picture of a little boy, holding a baby goat. “Who is this?”

“It’s a Bedouin,” I snapped. “We don’t have his contact information.”

In the same calm tone, he told me not to become angry. Later I realized it was a necessary part of traveling in Israel, as a safety precaution. Ironically, we didn’t have to throw away our water bottles or take off our shoes when we passed through the security gate—which made me wonder at the effectiveness of U.S. policies at airports.

Regularly I hear people talking about Israeli airport security, and asking why we can’t do the same in the U.S. The short answer is: scale. Israel has 11 million airline passengers a year; there are close to 700 million in the U.S. Israel has seven airports; the U.S. has over 400 “primary” airports—and who knows how many others. Things that can work there just don’t scale to the U.S.
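
A back-of-envelope calculation makes the scale problem concrete. The sketch below (Python) uses the passenger figures from the paragraph above and the 15-minute interview from the traveler’s anecdote; the 2,000-hour screener work year is my own assumption for illustration:

# Rough cost of Israeli-style interviews at U.S. scale.
# Passenger counts are from the text; the 15-minute interview and the
# 2,000-hour work year are assumptions for illustration only.

ISRAEL_PASSENGERS = 11_000_000   # passengers per year
US_PASSENGERS = 700_000_000      # passengers per year
INTERVIEW_MINUTES = 15           # per passenger, as in the Eilat anecdote
WORK_HOURS_PER_YEAR = 2_000      # one full-time screener

def screeners_needed(passengers_per_year):
    interview_hours = passengers_per_year * INTERVIEW_MINUTES / 60
    return interview_hours / WORK_HOURS_PER_YEAR

print(f"Israel: ~{screeners_needed(ISRAEL_PASSENGERS):,.0f} full-time interviewers")
print(f"U.S.:   ~{screeners_needed(US_PASSENGERS):,.0f} full-time interviewers")
# Israel: ~1,375   U.S.: ~87,500

Even that lowball estimate ignores breaks, shift coverage, peak-hour surges, and the fact that those screeners would be spread across more than 400 airports; the real number would be far higher.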

Posted on July 3, 2007 at 3:13 PM71 Comments

Why Terrorism Doesn’t Work

This is an interesting paper on the efficacy of terrorism:

This study analyzes the political plights of twenty-eight terrorist groups—the complete list of foreign terrorist organizations (FTOs) as designated by the U.S. Department of State since 2001. The data yield two unexpected findings. First, the groups accomplished their forty-two policy objectives only 7 percent of the time. Second, although the groups achieved certain types of policy objectives more than others, the key variable for terrorist success was a tactical one: target selection. Groups whose attacks on civilian targets outnumbered attacks on military targets systematically failed to achieve their policy objectives, regardless of their nature.

The author believes that correspondent inference theory explains this. Basically, the theory says that people infer the motives of an actor based on the consequences of the action. So people assume that the motives of a terrorist are wanton death and destruction, and not the stated aims of the terrorist group:

The theory posited here is that terrorist groups that target civilians are unable to coerce policy change because terrorism has an extremely high correspondence. Countries believe that their civilian populations are attacked not because the terrorist group is protesting unfavorable external conditions such as territorial occupation or poverty. Rather, target countries infer from the short-term consequences of terrorism—the deaths of innocent citizens, mass fear, loss of confidence in the government to offer protection, economic contraction, and the inevitable erosion of civil liberties—the objectives of the terrorist group. In short, target countries view the negative consequences of terrorist attacks on their societies and political systems as evidence that the terrorists want them destroyed. Target countries are understandably skeptical that making concessions will placate terrorist groups believed to be motivated by these maximalist objectives.

This certainly explains a great deal about the U.S.’s reaction to the 9/11 attacks. Many people—along with our politicians and press—believe that al Qaeda terrorism is different, and they’re just out to kill us all. (In fact, I’m sure I’ll get blog comments along those lines.) The paper examines this belief: where it came from, how it manifested itself, and why it is wrong.

Posted on July 3, 2007 at 6:21 AM112 Comments

Terrorist Special Olympics in the UK

First London and then Glasgow. Who are these idiots? Is there a Special Olympics for terrorists going on in the UK this week?

Two points about Glasgow:

One, airport security worked. And two, putting a propane tank into a car and driving it into a building at high speed is the sort of thing that only works in old episodes of The A-Team. On television, you get a massive, extensive explosion. In real life, you only get a small localized fire.

I am particularly pleased with the reaction from the Scots, which is measured and reasonable. No one was hurt; no need to panic. Life goes on.

On the other hand, who invites their friends to come along on a suicide mission?

Posted on July 2, 2007 at 9:19 AM124 Comments

Robotic Guns

Scary, but philosophically no different than land mines:

Developed by state-owned Rafael, See-Shoot consists of a series of remotely controlled weapon stations which receive fire-control information from ground sensors and manned and unmanned aircraft. Once a target is verified and authorized for destruction, operators sitting safely behind command center computers push a button to fire the weapon.

Posted on July 2, 2007 at 8:42 AM37 Comments

Bioterrorism Detection Systems and False Alarms

Interesting.

It took several days for New Jersey officials to establish that the alert wasn’t the beginning of a deadly bioterror attack, but had been triggered by someone’s allergic reaction to a smallpox vaccine at a local military facility. This false alert came from the government-funded computer program, Biosense. The complex program, which culls electronic health data from 350 of the nation’s urban hospitals as well as veterans’ hospitals and defense department facilities, comes after a string of costly, and never fully realized computer ventures before it. But three years into its development, with a price tag of around $230 million (on top of millions more spent on unsuccessful systems before it), it is unclear as to exactly what the program can accomplish.
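
The deeper problem with systems like this is the base rate: real bioterror attacks are so rare that even a very accurate detector will produce almost nothing but false alarms. Here is a minimal sketch of the arithmetic (in Python), with entirely hypothetical numbers chosen only to illustrate the effect, not Biosense’s actual performance figures:

# Base-rate arithmetic for a rare-event detector.
# All numbers are hypothetical illustrations, not Biosense figures.

DAYS_PER_YEAR = 365
attack_prob_per_day = 1 / (100 * DAYS_PER_YEAR)  # assume one real attack per century
sensitivity = 0.99          # detector flags 99% of real attacks
false_positive_rate = 0.01  # flags 1% of ordinary days (vaccine reactions, flu spikes)

# Bayes' theorem: P(real attack | alarm)
p_alarm = (sensitivity * attack_prob_per_day
           + false_positive_rate * (1 - attack_prob_per_day))
p_attack_given_alarm = sensitivity * attack_prob_per_day / p_alarm

print(f"Expected alarms per year: {p_alarm * DAYS_PER_YEAR:.1f}")
print(f"P(real attack | alarm):   {p_attack_given_alarm:.2%}")
# About 3.7 alarms a year, each with roughly a 0.27% chance of being real.

Driving the false-positive rate down helps, but when the event is rare enough, nearly every alarm is still false; the New Jersey episode is exactly what the arithmetic predicts.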

EDITED TO ADD (7/2): The article is in Google’s cache.

Posted on July 2, 2007 at 7:54 AM5 Comments
