Blog: August 2006 Archives

Behavioral Profiling Nabs Warren Jeffs

This is interesting:

A paper license tag, a salad and stories that didn’t make sense pricked the suspicions of a state trooper who stopped the car of a wanted fugitive polygamist in Las Vegas.

But it was the pumping carotid artery in the neck of Warren Steed Jeffs that convinced Nevada Highway Patrolman Eddie Dutchover that he had cornered someone big.

This is behavioral profiling done right, and it reminds me of the Diana Dean story. (Here’s another example of behavioral profiling done right, and here is an article by Malcolm Gladwell on profiling and generalizations.)

Behavioral profiling is tough to do well. It requires intelligent and well-trained officers. Done badly, it quickly defaults to racial profiling. But done well, it’ll do far more to keep us safe than object profiling (e.g., banning liquids on aircraft).

Posted on August 31, 2006 at 1:11 PM • 53 Comments

Terrorists as Pirates

“The Dread Pirate Bin Laden” argues that, legally, terrorists should be treated as pirates under international law:

More than 2,000 years ago, Marcus Tullius Cicero defined pirates in Roman law as hostis humani generis, “enemies of the human race.” From that day until now, pirates have held a unique status in the law as international criminals subject to universal jurisdiction—meaning that they may be captured wherever they are found, by any person who finds them. The ongoing war against pirates is the only known example of state vs. nonstate conflict until the advent of the war on terror, and its history is long and notable. More important, there are enormous potential benefits of applying this legal definition to contemporary terrorism.

[…]

President Bush and others persist in depicting this new form of state vs. nonstate warfare in traditional terms, as with the president’s declaration of June 2, 2004, that “like the Second World War, our present conflict began with a ruthless surprise attack on the United States.” He went on: “We will not forget that treachery and we will accept nothing less than victory over the enemy.” What constitutes ultimate victory against an enemy that lacks territorial boundaries and governmental structures, in a war without fields of battle or codes of conduct? We can’t capture the enemy’s capital and hoist our flag in triumph. The possibility of perpetual embattlement looms before us.

If the war on terror becomes akin to war against the pirates, however, the situation would change. First, the crime of terrorism would be defined and proscribed internationally, and terrorists would be properly understood as enemies of all states. This legal status carries significant advantages, chief among them the possibility of universal jurisdiction. Terrorists, as hostis humani generis, could be captured wherever they were found, by anyone who found them. Pirates are currently the only form of criminals subject to this special jurisdiction.

Second, this definition would deter states from harboring terrorists on the grounds that they are “freedom fighters” by providing an objective distinction in law between legitimate insurgency and outright terrorism. This same objective definition could, conversely, also deter states from cracking down on political dissidents as “terrorists,” as both Russia and China have done against their dissidents.

Recall the U.N. definition of piracy as acts of “depredation [committed] for private ends.” Just as international piracy is viewed as transcending domestic criminal law, so too must the crime of international terrorism be defined as distinct from domestic homicide or, alternately, revolutionary activities. If a group directs its attacks on military or civilian targets within its own state, it may still fall within domestic criminal law. Yet once it directs those attacks on property or civilians belonging to another state, it exceeds both domestic law and the traditional right of self-determination, and becomes akin to a pirate band.

Third, and perhaps most important, nations that now balk at assisting the United States in the war on terror might have fewer reservations if terrorism were defined as an international crime that could be prosecuted before the International Criminal Court.

Ross Anderson recognized the parallels between terrorism and piracy back in 2001.

Posted on August 30, 2006 at 7:57 AM • 103 Comments

Details on the British Terrorist Arrest

Details are emerging:

  • There was some serious cash flow from someone, presumably someone abroad.
  • There was no imminent threat.
  • However, the threat was real. And it seems pretty clear that it would have bypassed all existing airport security systems.
  • The conspirators were radicalized by the war in Iraq, although it is impossible to say whether they would have been otherwise radicalized without it.
  • They were caught through police work, not through any broad surveillance, and were under surveillance for more than a year.

What pisses me off most is the second item. By arresting the conspirators early, the police squandered the chance to learn more about the network and arrest more of them—and to present a less flimsy case. There have been many news reports detailing how the U.S. pressured the UK government to make the arrests sooner, possibly out of political motivations. (And then Scotland Yard got annoyed at the U.S. leaking plot details to the press, hampering their case.)

My initial comments on the arrest are here. I still think that all of the new airline security measures are an overreaction. (This essay makes the same point, as well as describing a 1995 terrorist plot that was remarkably similar in both materials and modus operandi—and didn’t result in a complete ban on liquids.)

As I said on a radio interview a couple of weeks ago: “We ban guns and knives, and the terrorists use box cutters. We ban box cutters and corkscrews, and they hide explosives in their shoes. We screen shoes, and the terrorists use liquids. We ban liquids, and the terrorist will use something else. It’s not a fair game, because the terrorists get to see our security measures before they plan their attack.” And it’s not a game we can win. So let’s stop playing, and play a game we actually can win. The real lesson of the London arrests is that investigation and intelligence work.

EDITED TO ADD (8/29): Seems this URL is unavailable in the U.K. See the comments for ways to bypass the block.

Posted on August 29, 2006 at 7:20 AM • 81 Comments

Stupid Security Awards Nominations Open

Get your nominations in.

The “Stupid Security Awards” aim to highlight the absurdities of the security industry. Privacy International’s director, Simon Davies, said his group had taken the initiative because of “innumerable” security initiatives around the world that had absolutely no genuine security benefit. The awards were first staged in 2003 and attracted over 5,000 nominations. This will be the second competition in the series.

“The situation has become ridiculous,” said Mr Davies. “Security has become the smokescreen for incompetent and robotic managers the world over.”

Unworkable security practices and illusory security measures do nothing to help issues of real public concern. They only hinder the public, intrude unnecessarily into our private lives and often reduce us to the status of cattle.

[…]

Privacy International is calling for nominations to name and shame the worst offenders. The competition closes on October 31st 2006. The award categories are:

  • Most Egregiously Stupid Award
  • Most Inexplicably Stupid Award
  • Most Annoyingly Stupid Award
  • Most Flagrantly Intrusive Award
  • Most Stupidly Counter Productive Award

The competition will be judged by an international panel of well-known security experts, public policy specialists, privacy advocates and journalists.

Posted on August 28, 2006 at 7:39 AM • 21 Comments

USBDumper

USBDumper (article is in French; here’s the software) is a cute little utility that silently copies the contents of an inserted USB drive onto the PC. The idea is that you install this piece of software on your computer, or on a public PC, and then you collect the files—some of them personal and confidential—from anyone who plugs their USB drive into that computer. (This blog post talks about a version that downloads a disk image, allowing someone to recover deleted files as well.)
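For a sense of how little code such a utility needs, here is a minimal Python sketch of the idea—not USBDumper itself: it polls for newly mounted drive letters on Windows and quietly copies everything it finds. The destination folder and polling interval are my own arbitrary choices.

```python
import os
import shutil
import string
import time

def mounted_drives():
    """Return the set of drive letters currently mounted (Windows)."""
    return {d for d in string.ascii_uppercase if os.path.exists(d + ":\\")}

def watch_and_dump(dest_root="C:\\dump", poll_seconds=2):
    """Poll for newly inserted drives and silently copy their contents.

    A sketch of the USBDumper idea, not the actual tool: paths and
    timing here are assumptions for illustration.
    """
    known = mounted_drives()
    while True:
        for drive in mounted_drives() - known:  # a drive was just inserted
            dest = os.path.join(dest_root, drive + time.strftime("-%Y%m%d%H%M%S"))
            try:
                shutil.copytree(drive + ":\\", dest)
            except (shutil.Error, OSError):
                pass  # skip unreadable files; stay silent
        known = mounted_drives()
        time.sleep(poll_seconds)

# watch_and_dump()  # runs forever once started
```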

No big deal to anyone who worries about computer security for a living, but probably a rude shock to salespeople, conference presenters, file sharers, and many others who regularly plug their USB drives into strange PCs.

EDITED TO ADD (10/24): USBDumper 2.2 has been released. The webpage includes a number of other useful utilities.

Posted on August 25, 2006 at 6:47 AM • 46 Comments

Skype Call Traced

Kobi Alexander fled the United States ten days ago. He was tracked down in Sri Lanka via a Skype call:

According to the report, Alexander was located after making a one-minute call via the online telephone Skype service. The call, made from the Sri Lankan capital Colombo, alerted intelligence agencies to his presence in the country.

Ars Technica explains:

The fugitive former CEO may have been convinced that using Skype made him safe from tracking, but he—and everyone else that believes VoIP is inherently more secure than a landline—was wrong. Tracking anonymous peer-to-peer VoIP traffic over the Internet is possible (PDF). In fact, it can be done even if the parties have taken some steps to disguise the traffic.

Let this be a warning to all of you who thought Skype was anonymous.

Posted on August 24, 2006 at 1:45 PM • 62 Comments

What the Terrorists Want

On Aug. 16, two men were escorted off a plane headed for Manchester, England, because some passengers thought they looked either Asian or Middle Eastern, might have been talking Arabic, wore leather jackets, and looked at their watches—and the passengers refused to fly with them on board. The men were questioned for several hours and then released.

On Aug. 15, an entire airport terminal was evacuated because someone’s cosmetics triggered a false positive for explosives. The same day, a Muslim man was removed from an airplane in Denver for reciting prayers. The Transportation Security Administration decided that the flight crew overreacted, but he still had to spend the night in Denver before flying home the next day. The next day, a Port of Seattle terminal was evacuated because a couple of dogs gave a false alarm for explosives.

On Aug. 19, a plane made an emergency landing in Tampa, Florida, after the crew became suspicious because two of the lavatory doors were locked. The plane was searched, but nothing was found. Meanwhile, a man who tampered with a bathroom smoke detector on a flight to San Antonio was cleared of terrorism, but only after having his house searched.

On Aug. 16, a woman suffered a panic attack and became violent on a flight from London to Washington, so the plane was escorted to the Boston airport by fighter jets. “The woman was carrying hand cream and matches but was not a terrorist threat,” said the TSA spokesman after the incident.

And on Aug. 18, a plane flying from London to Egypt made an emergency landing in Italy when someone found a bomb threat scrawled on an air sickness bag. Nothing was found on the plane, and no one knows how long the note was on board.

I’d like everyone to take a deep breath and listen for a minute.

The point of terrorism is to cause terror, sometimes to further a political goal and sometimes out of sheer hatred. The people terrorists kill are not the targets; they are collateral damage. And blowing up planes, trains, markets or buses is not the goal; those are just tactics. The real targets of terrorism are the rest of us: the billions of us who are not killed but are terrorized because of the killing. The real point of terrorism is not the act itself, but our reaction to the act.

And we’re doing exactly what the terrorists want.

We’re all a little jumpy after the recent arrest of 23 terror suspects in Great Britain. The men were reportedly plotting a liquid-explosive attack on airplanes, and both the press and politicians have been trumpeting the story ever since.

In truth, it’s doubtful that their plan would have succeeded; chemists have been debunking the idea since it became public. Certainly the suspects were a long way off from trying: None had bought airline tickets, and some didn’t even have passports.

Regardless of the threat, from the would-be bombers’ perspective, the explosives and planes were merely tactics. Their goal was to cause terror, and in that they’ve succeeded.

Imagine for a moment what would have happened if they had blown up 10 planes. There would be canceled flights, chaos at airports, bans on carry-on luggage, world leaders talking tough new security measures, political posturing and all sorts of false alarms as jittery people panicked. To a lesser degree, that’s basically what’s happening right now.

Our politicians help the terrorists every time they use fear as a campaign tactic. The press helps every time it writes scare stories about the plot and the threat. And if we’re terrified, and we share that fear, we help. All of these actions intensify and repeat the terrorists’ actions, and increase the effects of their terror.

(I am not saying that the politicians and press are terrorists, or that they share any of the blame for terrorist attacks. I’m not that stupid. But the subject of terrorism is more complex than it appears, and understanding its various causes and effects is vital for understanding how to best deal with it.)

The implausible plots and false alarms actually hurt us in two ways. Not only do they increase the level of fear, but they also waste time and resources that could be better spent fighting the real threats and increasing actual security. I’ll bet the terrorists are laughing at us.

Another thought experiment: Imagine for a moment that the British government arrested the 23 suspects without fanfare. Imagine that the TSA and its European counterparts didn’t engage in pointless airline-security measures like banning liquids. And imagine that the press didn’t write about it endlessly, and that the politicians didn’t use the event to remind us all how scared we should be. If we’d reacted that way, then the terrorists would have truly failed.

It’s time we calm down and fight terror with antiterror. This does not mean that we simply roll over and accept terrorism. There are things our government can and should do to fight terrorism, most of them involving intelligence and investigation—and not focusing on specific plots.

But our job is to remain steadfast in the face of terror, to refuse to be terrorized. Our job is to not panic every time two Muslims stand together checking their watches. There are approximately 1 billion Muslims in the world, a large percentage of them not Arab, and about 320 million Arabs in the Middle East, the overwhelming majority of them not terrorists. Our job is to think critically and rationally, and to ignore the cacophony of other interests trying to use terrorism to advance political careers or increase a television show’s viewership.

The surest defense against terrorism is to refuse to be terrorized. Our job is to recognize that terrorism is just one of the risks we face, and not a particularly common one at that. And our job is to fight those politicians who use fear as an excuse to take away our liberties and promote security theater that wastes money and doesn’t make us any safer.

This essay originally appeared on Wired.com.

EDITED TO ADD (3/24): Here’s another incident.

EDITED TO ADD (3/29): There have been many more incidents since I wrote this—all false alarms. I’ve stopped keeping a list.

Posted on August 24, 2006 at 7:08 AM • 296 Comments

Privacy Risks of Public Mentions

Interesting paper: “You are what you say: privacy risks of public mentions,” Proceedings of the 29th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, 2006.

Abstract:

In today’s data-rich networked world, people express many aspects of their lives online. It is common to segregate different aspects in different places: you might write opinionated rants about movies in your blog under a pseudonym while participating in a forum or web site for scholarly discussion of medical ethics under your real name. However, it may be possible to link these separate identities, because the movies, journal articles, or authors you mention are from a sparse relation space whose properties (e.g., many items related to by only a few users) allow re-identification. This re-identification violates people’s intentions to separate aspects of their life and can have negative consequences; it also may allow other privacy violations, such as obtaining a stronger identifier like name and address. This paper examines this general problem in a specific setting: re-identification of users from a public web movie forum in a private movie ratings dataset. We present three major results. First, we develop algorithms that can re-identify a large proportion of public users in a sparse relation space. Second, we evaluate whether private dataset owners can protect user privacy by hiding data; we show that this requires extensive and undesirable changes to the dataset, making it impractical. Third, we evaluate two methods for users in a public forum to protect their own privacy, suppression and misdirection. Suppression doesn’t work here either. However, we show that a simple misdirection strategy works well: mention a few popular items that you haven’t rated.
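To make the re-identification idea concrete, here is a toy Python sketch of rarity-weighted matching. The scoring function is my own assumption, not the authors’ algorithm: shared items that few private users relate to count as strong evidence, popular items count for little.

```python
import math
from collections import Counter

def reidentify(public_mentions, private_ratings):
    """Return the private user whose items best match the public mentions.

    A toy version of the paper's idea: weight each shared item by its
    rarity in the private dataset (this scoring rule is an assumption).
    """
    item_counts = Counter(i for items in private_ratings.values() for i in items)
    n_users = len(private_ratings)

    def score(private_items):
        shared = public_mentions & private_items
        # log(N / count): rare shared items dominate the score
        return sum(math.log(n_users / item_counts[i]) for i in shared)

    return max(private_ratings, key=lambda u: score(private_ratings[u]))

# Toy data: the forum user mentioned one popular and one obscure movie.
ratings = {
    "user_a": {"Casablanca", "Obscure Doc 1993"},
    "user_b": {"Casablanca", "Titanic"},
}
print(reidentify({"Casablanca", "Obscure Doc 1993"}, ratings))  # -> user_a
```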

Unfortunately, the paper is only available to ACM members.

EDITED TO ADD (8/24): Paper is here.

Posted on August 23, 2006 at 2:11 PM • 23 Comments

TrackMeNot

In the wake of AOL’s publication of search data, and the New York Times article demonstrating how easy it is to figure out who did the searching, we have TrackMeNot:

TrackMeNot runs in Firefox as a low-priority background process that periodically issues randomized search-queries to popular search engines, e.g., AOL, Yahoo!, Google, and MSN. It hides users’ actual search trails in a cloud of indistinguishable ‘ghost’ queries, making it difficult, if not impossible, to aggregate such data into accurate or identifying user profiles. TrackMeNot integrates into the Firefox ‘Tools’ menu and includes a variety of user-configurable options.

Let’s count the ways this doesn’t work.

One, it doesn’t hide your searches. If the government wants to know who’s been searching on “al Qaeda recruitment centers,” it won’t matter that you’ve made ten thousand other searches as well—you’ll be targeted.

Two, it’s too easy to spot. There are only 1,673 search terms in the program’s dictionary. Here, as a random example, are the program’s “G” words:

gag, gagged, gagging, gags, gas, gaseous, gases, gassed, gasses, gassing, gen, generate, generated, generates, generating, gens, gig, gigs, gillion, gillions, glass, glasses, glitch, glitched, glitches, glitching, glob, globed, globing, globs, glue, glues, gnarlier, gnarliest, gnarly, gobble, gobbled, gobbles, gobbling, golden, goldener, goldenest, gonk, gonked, gonking, gonks, gonzo, gopher, gophers, gorp, gorps, gotcha, gotchas, gribble, gribbles, grind, grinding, grinds, grok, grokked, grokking, groks, ground, grovel, groveled, groveling, grovelled, grovelling, grovels, grue, grues, grunge, grunges, gun, gunned, gunning, guns, guru, gurus

The program’s authors claim that this list is temporary, and that there will eventually be a TrackMeNot server with an ever-changing word list. Of course, that list can be monitored by any analysis program—as could any queries to that server.

In any case, every twelve seconds—exactly—the program picks a random pair of words and sends it to either AOL, Yahoo, MSN, or Google. My guess is that your searches contain more than two words, you don’t send them out in precise twelve-second intervals, and you favor one search engine over the others.
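That regularity is trivially machine-detectable. A hypothetical detector, sketched here with made-up thresholds, needs nothing more than the inter-query gaps and the query lengths:

```python
import statistics

def looks_scripted(timestamps, query_lengths):
    """Flag a search log whose timing is metronomic and whose queries are
    uniformly two words long: the TrackMeNot signature described above.
    The thresholds are illustrative assumptions, not tuned values."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    regular = statistics.pstdev(gaps) < 0.5        # near-exact 12-second beat
    uniform = all(n == 2 for n in query_lengths)   # always word pairs
    return regular and uniform

print(looks_scripted([0, 12, 24, 36], [2, 2, 2, 2]))   # True: probably decoys
print(looks_scripted([0, 40, 95, 300], [1, 3, 2, 5]))  # False: human-like
```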

Three, some of the program’s searches are worse than yours. The dictionary includes:

HIV, atomic, bomb, bible, bibles, bombing, bombs, boxes, choke, choked, chokes, choking, chain, crackers, empire, evil, erotics, erotices, fingers, knobs, kicking, harier, hamster, hairs, legal, letterbomb, letterbombs, mailbomb, mailbombing, mailbombs, rapes, raping, rape, raper, rapist, virgin, warez, warezes, whack, whacked, whacker, whacking, whackers, whacks, pistols

Does anyone really think that searches on “erotic rape,” “mailbombing bibles,” and “choking virgins” will make their legitimate searches less noteworthy?

And four, it wastes a whole lot of bandwidth. A query every twelve seconds translates into 2,400 queries a day, assuming an eight-hour workday. A typical Google response is about 25K, so we’re talking 60 megabytes of additional traffic daily. Imagine if everyone in the company used it.
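The arithmetic is easy to check:

```python
queries_per_day = 8 * 60 * 60 // 12              # one query every 12 s, 8-hour day
megabytes_per_day = queries_per_day * 25 / 1000  # ~25 KB per Google response
print(queries_per_day, megabytes_per_day)        # 2400 queries, 60.0 MB
```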

I suppose this kind of thing would stop someone who has a paper printout of your searches and is looking through them manually, but it’s not going to hamper computer analysis very much, or anyone who isn’t lazy: it wouldn’t be hard for a profiling program to recognize and ignore these searches.

As one commentator put it:

Imagine a cop pulls you over for speeding. As he approaches, you realize you left your wallet at home. Without your driver’s license, you could be in a lot of trouble. When he approaches, you roll down your window and shout: “Hello Officer! I don’t have insurance on this vehicle! This car is stolen! I have weed in my glovebox! I don’t have my driver’s license! I just hit an old lady minutes ago! I’ve been running stop lights all morning! I have a dead body in my trunk! This car doesn’t pass the emissions tests! I’m not allowed to drive because I am under house arrest! My gas tank runs on the blood of children!” You stop to catch a breath, confident you have supplied so much information to the cop that you can’t possibly be caught for not having your license now.

Yes, data mining is a signal-to-noise problem. But artificial noise like this isn’t going to help much. If I were going to improve on this idea, I would make the plugin watch the user’s search patterns. I would make it send queries only to the search engines the user does, only when he is actually online doing things. I would randomize the timing. (There’s a comment to that effect in the code, so presumably this will be fixed in a later version of the program.) And I would make it monitor the web pages the user looks at, and send queries based on keywords it finds on those pages. And I would make it send queries in the form the user tends to use, whether it be single words, pairs of words, or whatever.
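As a sketch of those improvements, and nothing more, the decoy generator below draws words from pages the user actually visited, matches the user’s typical query length, and randomizes its timing. The engine URL, word source, and timing distribution are all placeholder assumptions, and a real search engine may well refuse scripted requests like these.

```python
import random
import time
import urllib.parse
import urllib.request

def decoy_query(engine_url, recent_page_words, words_per_query):
    """Send one decoy search built from words seen on pages the user
    visited. engine_url and the word list are placeholder assumptions."""
    query = " ".join(random.sample(recent_page_words, words_per_query))
    urllib.request.urlopen(engine_url + urllib.parse.quote_plus(query)).read()

# Mimic the user: their usual engine, their typical query length, and
# jittered timing instead of a fixed twelve-second beat.
words = ["privacy", "firefox", "search", "profile", "anonymous", "plugin"]
while True:
    decoy_query("https://www.google.com/search?q=", words, words_per_query=2)
    time.sleep(random.expovariate(1 / 300))  # about five minutes on average
```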

But honestly, I don’t know that I would use it even then. The way serious people protect their web-searching privacy is through anonymization: use Tor for serious web anonymization, or Black Box Search for simple anonymous searching (here’s a Greasemonkey extension that does that automatically). And set your browser to delete search-engine cookies regularly.

Posted on August 23, 2006 at 6:53 AM • 88 Comments

Educating Users

I’ve met users, and they’re not fluent in security. They might be fluent in spreadsheets, eBay, or sending jokes over e-mail, but they’re not technologists, let alone security people. Of course, they’re making all sorts of security mistakes. I too have tried educating users, and I agree that it’s largely futile.

Part of the problem is generational. We’ve seen this with all sorts of technologies: electricity, telephones, microwave ovens, VCRs, video games. Older generations approach newfangled technologies with trepidation, distrust and confusion, while the children who grew up with them understand them intuitively.

But while the don’t-get-it generation will die off eventually, we won’t suddenly enter an era of unprecedented computer security. Technology moves too fast these days; there’s no time for any generation to become fluent in anything.

Earlier this year, researchers ran an experiment in London’s financial district. Someone stood on a street corner and handed out CDs, saying they were a “special Valentine’s Day promotion.” Many people, some working at sensitive bank workstations, ran the program on the CDs on their work computers. The program was benign—all it did was alert some computer on the Internet that it was running—but it could just as easily have been malicious. The researchers concluded that users don’t care about security. That’s simply not true. Users care about security—they just don’t understand it.

I don’t see a failure of education; I see a failure of technology. It shouldn’t have been possible for those users to run that CD, or for a random program stuffed into a banking computer to “phone home” across the Internet.

The real problem is that computers don’t work well. The industry has convinced everyone that people need a computer to survive, and at the same time it’s made computers so complicated that only an expert can maintain them.

If I try to repair my home heating system, I’m likely to break all sorts of safety rules. I have no experience in that sort of thing, and honestly, there’s no point in trying to educate me. But the heating system works fine without my having to learn anything about it. I know how to set my thermostat and to call a professional if anything goes wrong.

Punishment isn’t something you do instead of education; it’s a form of education—a very primal form of education best suited to children and animals (and experts aren’t so sure about children). I say we stop punishing people for failures of technology, and demand that computer companies market secure hardware and software.

This originally appeared in the April 2006 issue of Information Security Magazine, as the second part of a point/counterpoint with Marcus Ranum. You can read Marcus’s essay here, if you are a subscriber. (Subscriptions are free to “qualified” people.)

EDITED TO ADD (9/11): Here’s Marcus’s half.

Posted on August 22, 2006 at 12:35 PM • 80 Comments

Ten Worst Privacy Debacles of All Time

Not a bad list:

10. ChoicePoint data spill
9. VA laptop theft
8. CardSystems hacked
7. Discovery of data on used hard drives for sale
6. Philip Agee’s revenge
5. Amy Boyer’s murder
4. Testing CAPPS II
3. COINTELPRO
2. AT&T lets the NSA listen to all phone calls
1. The creation of the Social Security Number

EDITED TO ADD (8/22): Daniel Solove comments.

Posted on August 22, 2006 at 6:19 AM • 45 Comments

Call Forwarding Credit Card Scam

This is impressive:

A fraudster contacts an AT&T service rep and says he works at a pizza parlor and that the phone is having trouble. Until things get fixed, he requests that all incoming calls be forwarded to another number, which he provides.

Pizza orders are thus routed by AT&T to the fraudster’s line. When a call comes in, the fraudster pretends to take the customer’s order but says payment must be made in advance by credit card.

The unsuspecting customer gives his or her card number and expiration date, and before you can say “extra cheese,” the fraudster is ready to go on an Internet shopping spree using someone else’s money.

Those of us who know security have been telling people not to trust incoming phone calls—that you should call the company if you are going to divulge personal information to them. Seems like that advice isn’t foolproof.

The problem is the phone company, of course. They’re forwarding calls based on an unauthenticated request. AT&T doesn’t really want to talk about details:

He was reluctant to discuss the steps AT&T has taken to improve its call-forwarding system so this sort of thing doesn’t happen again. What, for example, is to prevent someone from convincing AT&T to forward all calls to a local flower store or some other business that takes orders by phone?

“We had some guidelines in place that we believe were effective,” Britton said. “Now we have extra precautions.”

It seems to me that AT&T would solve this problem more quickly if it were liable. Shouldn’t a pizza customer who has been scammed be allowed to sue AT&T? After all, the phone company didn’t route the customer’s calls properly. Does the credit card company have a basis for a suit? Certainly the pizza parlor does, but the effects of AT&T’s sloppy authentication are much greater than a few missed pizza orders.

Posted on August 21, 2006 at 1:35 PM • 45 Comments

Fraudulent Australian Census Takers

In Australia, criminals are posing as census takers and harvesting personal data for fraudulent purposes.

EDITED TO ADD (8/21): I didn’t notice that this link is from 2001. Sorry about missing that, but it actually makes the story more interesting. This is the sort of identity-theft tactic that I would have expected to see this year, as criminals have gotten more and more sophisticated. It surprises me that they were doing this five years ago as well.

Posted on August 21, 2006 at 6:24 AM • 17 Comments

Friday Squid Blogging: 13-foot Tentacle Found off Santa Cruz Coast

From KSBY:

A fisherman snags a rare catch in the Santa Barbara Channel, a tentacle belonging to a giant squid.

The 13-foot-long tentacle was found floating near Santa Cruz Island last week. Little is known about the species, it is only the fourth confirmed specimen ever found in Southern California.

“They are definitely present off our coast, they are probably not very common here and they probably live in deep water, this one here was caught near the Santa Cruz Basin, which is several thousand feet deep,” said Eric Hochberg, Santa Barbara Museum of Natural History Curator.

The tentacle likely surfaced after the rest of the squid was eaten by a whale.

Posted on August 18, 2006 at 3:20 PM • 17 Comments

Behavioral Profiling

I’ve long been a fan of behavioral profiling, as opposed to racial profiling. The U.S. has been testing such a program. While there are legitimate fears that this could end up being racial profiling in disguise, I think this kind of thing is the right idea. (Although I am less impressed with this kind of thing.)

EDITED TO ADD (8/18): Funny cartoon on profiling.

There’s a moral here. Profiling is something we all do, and we do it because—for the most part—it works. But when you’re dealing with an intelligent adversary, as opposed to the cat, you invite that adversary to deliberately try to subvert your profiling system. The effectiveness of any profiling system is directly related to how hard it is to subvert.

Posted on August 18, 2006 at 1:21 PM • 51 Comments

Human/Bear Security Trade-Off

Interesting example:

Back in the 1980s, Yosemite National Park was having a serious problem with bears: They would wander into campgrounds and break into the garbage bins. This put both bears and people at risk. So the Park Service started installing armored garbage cans that were tricky to open—you had to swing a latch, align two bits of handle, that sort of thing. But it turns out it’s actually quite tricky to get the design of these cans just right. Make it too complex and people can’t get them open to put away their garbage in the first place. Said one park ranger, “There is considerable overlap between the intelligence of the smartest bears and the dumbest tourists.”

It’s a tough balance to strike. People are smart, but they’re impatient and unwilling to spend a lot of time solving the problem. Bears are dumb, but they’re tenacious and are willing to spend hours solving the problem. Given those two constraints, creating a trash can that can both work for people and not work for bears is not easy.

Posted on August 18, 2006 at 7:02 AM • 67 Comments

Random Bag Searches in Subways

Last year, New York City implemented a program of random bag searches in the subways. It was a silly idea, and I wrote about it then. Recently the U.S. Court of Appeals for the 2nd Circuit upheld the program. Daniel Solove wrote about the ruling:

The 2nd Circuit panel concluded that the program was “reasonable” under the 4th Amendment’s special needs doctrine. Under the special needs doctrine, if there are exceptional circumstances that make the warrant and probable cause requirements unnecessary, then the search should be analyzed in terms of whether it is “reasonable.” Reasonableness is determined by balancing privacy against the government’s need. The problem with the 2nd Circuit decision is that under its reasoning, nearly any search, no matter how intrusive into privacy, would be justified. This is because of the way it assesses the government’s side of the balance. When the government’s interest is preventing the detonation of a bomb on a crowded subway, with the potential of mass casualties, it is hard for anything to survive when balanced against it.

The key to the analysis should be the extent to which the search program will effectively improve subway safety. In other words, the goals of the program may be quite laudable, but nobody questions the importance of subway safety. Its weight is so hefty that little can outweigh it. The important issue is whether the search program is a sufficiently effective way of achieving those goals that it is worth the trade-off in civil liberties. On this question, unfortunately, the 2nd Circuit punts. It defers to the law enforcement officials:

That decision is best left to those with “a unique understanding of, and responsibility for, limited public resources, including a finite number of police officers.” Accordingly, we ought not conduct a “searching examination of effectiveness.” Instead, we need only determine whether the Program is “a reasonably effective means of addressing” the government interest in deterring and detecting a terrorist attack on the subway system…

Instead, plaintiffs claim that the Program can have no meaningful deterrent effect because the NYPD employs too few checkpoints. In support of that claim, plaintiffs rely upon various statistical manipulations of the sealed checkpoint data.

We will not peruse, parse, or extrapolate four months’ worth of data in an attempt to divine how many checkpoints the City ought to deploy in the exercise of its day to day police power. Counter terrorism experts and politically accountable officials have undertaken the delicate and esoteric task of deciding how best to marshal their available resources in light of the conditions prevailing on any given day. We will not and may not second guess the minutiae of their considered decisions. (internal citations omitted)

Although courts should not take a “know it all” attitude, they must not defer on such a critical question. The problem with many security measures is that they are not a very wise expenditure of resources. It is costly to have a lot of police officers engage in these random searches when they could be doing other things or money could be spent on other measures. A very small number of random searches in a subway system of over 4 million riders a day seems more symbolic than effective. If courts don’t question the efficacy of security measures in the name of terrorism, then it allows law enforcement officials to win nearly all the time. The government just needs to come into court and say “terrorism” and little else will matter.

Posted on August 16, 2006 at 3:32 PM • 36 Comments

On the Implausibility of the Explosives Plot

Really interesting analysis of the chemistry involved in the alleged UK terrorist plot:

Based on the claims in the media, it sounds like the idea was to mix H2O2 (hydrogen peroxide, but not the low test kind you get at the pharmacy), H2SO4 (sulfuric acid, of necessity very concentrated for it to work at all), and acetone (known to people worldwide as nail polish remover), to make acetone peroxides. You first have to mix the H2O2 and H2SO4 to get a powerful oxidizer, and then you use it on acetone to get the peroxides, which are indeed explosive.

A mix of H2O2 and H2SO4, commonly called “piranha bath”, is used in orgo labs around the world for cleaning the last traces of organic material out of glassware when you need it *really* clean—thus, many people who work around organic labs are familiar with it. When you mix it, it heats like mad, which is a common thing when you mix concentrated sulfuric acid with anything. It is very easy to end up with a spattering mess. You don’t want to be around the stuff in general. Here, have a look at a typical warning list from a lab about the stuff:

http://www.mne.umd.edu/LAMP/Sop/Piranha_SOP.htm

Now you may protest “but terrorists who are willing to commit suicide aren’t going to be deterred by being injured while mixing their precursor chemicals!”—but of course, determination isn’t the issue here, getting the thing done well enough to make the plane go boom is the issue. There is also the small matter of explaining to the guy next to you what you’re doing, or doing it in a tiny airplane bathroom while the plane jitters about.

Now, they could of course mix up their oxidizer in advance, but then finding a container to keep the stuff in that isn’t going to melt is a bit of an issue. The stuff reacts violently with *everything*. You’re not going to keep piranha bath in a shampoo bottle—not unless the shampoo bottle was engineered by James Bond’s Q. Glass would be most appropriate, assuming that you could find a way to seal it that wouldn’t be eaten.

Read the whole thing.

EDITED TO ADD (8/16): More speculation.

EDITED TO ADD (8/17): Even more speculation.

Posted on August 16, 2006 at 7:32 AM • 91 Comments

Review of U.S. Customs and Border Protection Anti-Terrorist Actions

Department of Homeland Security, Office of the Inspector General, “Review of CBP Actions Taken to Intercept Suspected Terrorists at U.S. Ports of Entry,” OIG-06-43, June 2006.

Results in Brief:

CBP has improved information sharing capabilities within the organization to smooth the flow of arriving passengers and increase the effectiveness of limited resources at POEs. Earlier, officers at POEs possessed limited information to help them resolve the identities of individuals mistakenly matched to the terrorist watch list, but a current initiative aims to provide supervisors at POEs with much more information to help them positively identify and clear individuals with names similar to those in the terrorist database. CBP procedures are highly prescriptive and withhold from supervisors the authority to make timely and informed decisions regarding the admissibility of individuals who they could quickly confirm are not the suspected terrorist.

As CBP has stepped up its efforts to intercept known and suspected terrorists at ports of entry, traditional missions such as narcotics interdiction and identification of fraudulent immigration documentation have been adversely affected. Recent data indicates a significant decrease over the past few years in the interception of narcotics and the identification of fraudulent immigration documents, especially at airports.

When a watchlisted or targeted individual is encountered at a POE, CBP generates several reports summarizing the incident. Each of these reports provides a different level of detail, and is distributed to a different readership. It is unclear, however, how details of the encounter and the information obtained from the suspected terrorist are disseminated for analysis. This inconsistent reporting is preventing DHS from developing independent intelligence assessments and may be preventing important information from inclusion in national strategic intelligence analyses.

During an encounter with a watchlisted individual, CBP officers at the POE often need to discuss sensitive details about the individual with law enforcement agencies and CBP personnel in headquarters offices. Some case details are classified. Because some CBP officers at POEs have not been granted the necessary security clearance, they are unable to review important information about a watchlisted individual and may not be able to participate with law enforcement agencies in interviews of certain individuals.

To improve the effectiveness of CBP personnel in their mission to prevent known and suspected terrorists from entering the United States, we are recommending that CBP: expand a biometric information collection program to include volunteers who would not normally provide this information when entering the United States; authorize POE supervisors limited discretion to make more timely admissibility determinations; review port of entry staffing models to ensure the current workforce is able to perform the entire range of CBP mission; establish a policy for more consistent reporting to intelligence agencies the details gathered during secondary interviews; and ensure all counterterrorism personnel at POEs are granted an appropriate security clearance.

Posted on August 15, 2006 at 1:19 PM • 23 Comments

Stealing Credit Card Information off Phone Lines

Here’s a sophisticated credit card fraud ring that intercepted credit card authorization calls in Phuket, Thailand.

The fraudsters loaded this data onto MP3 players, which they sent to accomplices in neighbouring Malaysia. Cloned credit cards were manufactured in Malaysia and sent back to Thailand, where they were used to fraudulently purchase goods and services.

It’s 2006 and those merchant terminals still don’t encrypt their communications?

Posted on August 15, 2006 at 6:19 AM • 29 Comments

Faux Disclosure

Good essay on “faux disclosure”: disclosing a vulnerability without really disclosing it.

You’ve probably heard of full disclosure, the security philosophy that calls for making public all details of vulnerabilities. It has been the subject of debates among researchers, vendors, and security firms. But the story that grabbed most of the headlines at the Black Hat Briefings in Las Vegas last week was based on a different type of disclosure. For lack of a better name, I’ll call it faux disclosure. Here’s why.

Security researchers Dave Maynor of ISS and Johnny Cache—a.k.a. Jon Ellch—demonstrated an exploit that allowed them to install a rootkit on an Apple laptop in less than a minute. Well, sort of; they showed a video of it, and also noted that they’d used a third-party Wi-Fi card in the demo of the exploit, rather than the MacBook’s internal Wi-Fi card. But they said that the exploit would work whether the third-party card—which they declined to identify—was inserted in a Mac, Windows, or Linux laptop.

[…]

How is that for murky and non-transparent? The whole world is at risk—if the exploit is real—whenever the unidentified card is used. But they won’t say which card, although many sources presume the card is based on the Atheros chipset, which Apple employs.

It gets worse. Brian Krebs of the Washington Post, who first reported on the exploit, updated his original story and has reported that Maynor said, “Apple had leaned on Maynor and Ellch pretty hard not to make this an issue about the Mac drivers—mainly because Apple had not fixed the problem yet.”

That’s part of what is meant by full disclosure these days—giving the vendor a chance to fix the vulnerability before letting the whole world know about it. That way, the thinking goes, the only people who get hurt by it are the people who get exploited by it. But damage to the responsible vendor’s image is mitigated somewhat, and many in the security business seem to think that damage control is more important than anything that might happen to any of the vendor’s customers.

Big deal. Publicly traded corporations like Apple and Microsoft and all the rest have been known to ignore ethics, morality, any consideration of right or wrong, or anything at all that might divert them from their ultimate goal: to maximize profits. Because of this, some corporations only speak the truth when it is in their best interest. Otherwise, they lie or maintain silence.

Full disclosure is the only thing that forces vendors to fix security problems. The further we move away from full disclosure, the less incentive vendors have to fix problems and the more at-risk we all are.

Posted on August 14, 2006 at 1:41 PM • 47 Comments

HSBC Insecurity Hype

The Guardian has the story:

One of Britain’s biggest high street banks has left millions of online bank accounts exposed to potential fraud because of a glaring security loophole, the Guardian has learned.

The defect in HSBC’s online banking system means that 3.1 million UK customers registered to use the service have been vulnerable to attack for at least two years. One computing expert called the lapse “scandalous”.

The discovery was made by a group of researchers at Cardiff University, who found that anyone exploiting the flaw was guaranteed to be able to break into any account within nine attempts.

Sounds pretty bad.

But look at this:

The flaw, which is not being detailed by the Guardian, revolves around the way HSBC customers access their web-based banking service. Criminals using so-called “keyloggers” – readily available gadgets or viruses which record every keystroke made on a target computer – can easily deduce the data needed to gain unfettered access to accounts in just a few attempts.

So, the “scandalous” flaw is that an attacker who already has a keylogger installed on someone’s computer can break into his HSBC account. Seems to me if an attacker has a keylogger installed on someone’s computer, then he’s got all sorts of security issues.

If this is the biggest flaw in HSBC’s login authentication system, I think they’re doing pretty well.

Posted on August 14, 2006 at 7:06 AM • 53 Comments

Last Week's Terrorism Arrests

Hours-long waits in the security line. Ridiculous prohibitions on what you can carry onboard. Last week’s foiling of a major terrorist plot and the subsequent airport security graphically illustrate the difference between effective security and security theater.

None of the airplane security measures implemented because of 9/11—no-fly lists, secondary screening, prohibitions against pocket knives and corkscrews—had anything to do with last week’s arrests. And they wouldn’t have prevented the planned attacks, had the terrorists not been arrested. A national ID card wouldn’t have made a difference, either.

Instead, the arrests are a victory for old-fashioned intelligence and investigation. Details are still secret, but police in at least two countries were watching the terrorists for a long time. They followed leads, figured out who was talking to whom, and slowly pieced together both the network and the plot.

The new airplane security measures focus on that plot, because authorities believe they have not captured everyone involved. It’s reasonable to assume that a few lone plotters, knowing their compatriots are in jail and fearing their own arrest, would try to finish the job on their own. The authorities are not being public with the details—much of the “explosive liquid” story doesn’t hang together—but the excessive security measures seem prudent.

But only temporarily. Banning box cutters since 9/11, or taking off our shoes since Richard Reid, has not made us any safer. And a long-term prohibition against liquid carry-ons won’t make us safer, either. It’s not just that there are ways around the rules, it’s that focusing on tactics is a losing proposition.

It’s easy to defend against what the terrorists planned last time, but it’s shortsighted. If we spend billions fielding liquid-analysis machines in airports and the terrorists use solid explosives, we’ve wasted our money. If they target shopping malls, we’ve wasted our money. Focusing on tactics simply forces the terrorists to make a minor modification in their plans. There are too many targets—stadiums, schools, theaters, churches, the long line of densely packed people before airport security—and too many ways to kill people.

Security measures that require us to guess correctly don’t work, because invariably we will guess wrong. It’s not security, it’s security theater: measures designed to make us feel safer but not actually safer.

Airport security is the last line of defense, and not a very good one at that. Sure, it’ll catch the sloppy and the stupid—and that’s a good enough reason not to do away with it entirely—but it won’t catch a well-planned plot. We can’t keep weapons out of prisons; we can’t possibly keep them off airplanes.

The goal of a terrorist is to cause terror. Last week’s arrests demonstrate how real security doesn’t focus on possible terrorist tactics, but on the terrorists themselves. It’s a victory for intelligence and investigation, and a dramatic demonstration of how investments in these areas pay off.

And if you want to know what you can do to help? Don’t be terrorized. They terrorize more of us if they kill some of us, but the dead are beside the point. If we give in to fear, the terrorists achieve their goal even if they were arrested. If we refuse to be terrorized, then they lose—even if their attacks succeed.

This op ed appeared today in the Minneapolis Star-Tribune.

EDITED TO ADD (8/13): The Department of Homeland Security declares an entire state of matter a security risk. And here’s a good commentary on being scared.

Posted on August 13, 2006 at 8:15 AM • 131 Comments

DHS Report on US-VISIT and RFID

Department of Homeland Security, Office of the Inspector General, “Enhanced Security Controls Needed For US-VISIT’s System Using RFID Technology (Redacted),” OIG-06-39, June 2006.

From the Executive Summary:

We audited the Department of Homeland Security (DHS) and select organizational components’ security programs to evaluate the effectiveness of controls implemented on Radio Frequency Identification (RFID) systems. Systems employing RFID technology include a tag and reader on the front end and an application and database on the back end.

[…]

Overall, information security controls have been implemented to provide an effective level of security on the Automated Identification Management System (AIDMS). US-VISIT has implemented effective physical security controls over the RFID tags, readers, computer equipment, and database supporting the RFID system at the POEs visited. No personal information is stored on the tags used for US-VISIT. Travelers’ personal information is maintained in and can be obtained only with access to the system’s database. Additional security controls would need to be implemented if US-VISIT decides to store travelers’ personal information on RFID-enabled forms or migrates to universally readable Generation 2 (Gen2) products.

Although these controls provide overall system security, US-VISIT has not properly configured its AIDMS database to ensure that data captured and stored is properly protected. Furthermore, while AIDMS is operating with an Authority to Operate, US-VISIT had not tested its contingency plan to ensure that critical operations could be restored in the event of a disruption. In addition, US-VISIT has not developed its own RFID policy or ensured that the standard operating procedures are properly distributed and followed at all POEs.

I wrote about US-VISIT in 2004 and again in 2006. In that second essay, I gave a price of $15B. I no longer believe that figure, and I don’t have any better information on the price. But I still think my analysis holds. I would much rather take the money spent on US-VISIT and spend it on intelligence and investigation, the kind of security that resulted in the U.K. arrests earlier this week and is likely to actually make us safer.

Posted on August 11, 2006 at 7:27 AM • 20 Comments

New Airline Security Rules

The foiled UK terrorist plot has wreaked havoc with air travel in the country:

All short-haul inbound flights to Heathrow airport have been cancelled. Some flights in and out of Gatwick have been suspended.

Security has been increased at Channel ports and the Eurotunnel terminal.

German carrier Lufthansa has cancelled flights to Heathrow and the Spanish airline Iberia has stopped UK flights.

British Airways has announced it has cancelled all its short-haul flights to and from Heathrow for the rest of Thursday.

The airline added that it was also cancelling some domestic and short haul services in and out of Gatwick airport during the remainder of the day.

In addition, pretty much no carry-ons are allowed:

These measures will prevent passengers from carrying hand luggage into the cabin of an aircraft with the following exceptions (which must be placed in a plastic bag):

  • Pocket size wallets and pocket size purses plus contents (for example money, credit cards, identity cards etc (not handbags);
  • Travel documents essential for the journey (for example passports and travel tickets);
  • Prescription medicines and medical items sufficient and essential for the flight (e.g. diabetic kit), except in liquid form unless verified as authentic;
  • Spectacles and sunglasses, without cases;
  • Contact lens holders, without bottles of solution;
  • For those traveling with an infant: baby food, milk (the contents of each bottle must be tasted by the accompanying passenger);
  • Sanitary items sufficient and essential for the flight (nappies, wipes, creams and nappy disposal bags);
  • Female sanitary items sufficient and essential for the flight, if unboxed (e.g. tampons, pads, towels and wipes) tissues (unboxed) and/or handkerchiefs;
  • Keys (but no electrical key fobs)

Across the Atlantic, the TSA has announced new security rules:

Passengers are not allowed to have gels or liquids of any kind at screening points or in the cabin of any airplane.

They said this includes beverages, food, suntan lotion, creams, toothpaste, hair gel, or similar items. Those items must be packed into checked luggage. Beverages bought on the secure side of the checkpoint must be disposed of before boarding the plane.

There are several exceptions to the new rule. Baby formula, breast milk, or juice for small children, prescription medications where the name matches the name of a ticketed passenger, as well as insulin and other essential health items may be brought onboard the plane.

See the TSA rules for more detail.

Given how little we know of the extent of the plot, these don’t seem like ridiculous short-term measures. I’m sure glad I’m not flying anywhere this week.

EDITED TO ADD (8/10): Interesting analysis by Eric Rescorla.

Posted on August 10, 2006 at 7:40 AM • 259 Comments

Doping in Professional Sports

The big news in professional bicycle racing is that Floyd Landis may be stripped of his Tour de France title because he tested positive for a banned performance-enhancing drug. Sidestepping the entire issue of whether professional athletes should be allowed to take performance-enhancing drugs, how dangerous those drugs are, and what constitutes a performance-enhancing drug in the first place, I’d like to talk about the security and economic issues surrounding the issue of doping in professional sports.

Drug testing is a security issue. Various sports federations around the world do their best to detect illegal doping, and players do their best to evade the tests. It’s a classic security arms race: improvements in detection technologies lead to improvements in drug detection evasion, which in turn spur the development of better detection capabilities. Right now, it seems that the drugs are winning; in places, these drug tests are described as “intelligence tests”: if you can’t get around them, you don’t deserve to play.

But unlike many security arms races, the detectors have the ability to look into the past. Last year, a laboratory tested Lance Armstrong’s urine and found traces of the banned substance EPO. What’s interesting is that the urine sample tested wasn’t from 2005; it was from 1999. Back then, there weren’t any good tests for EPO in urine. Today there are, and the lab took a frozen urine sample—who knew that labs save urine samples from athletes?—and tested it. He was later cleared—the lab procedures were sloppy—but I don’t think the real ramifications of the episode were ever well understood. Testing can go back in time.

This has two major effects. One, doctors who develop new performance-enhancing drugs may know exactly what sorts of tests the anti-doping laboratories are going to run, and they can test their ability to evade drug detection beforehand. But they cannot know what sorts of tests will be developed in the future, and athletes cannot assume that just because a drug is undetectable today it will remain so years later.

Two, athletes accused of doping based on years-old urine samples have no way of defending themselves. They can’t resubmit to testing; it’s too late. If I were an athlete worried about these accusations, I would deposit my urine “in escrow” on a regular basis to give me some ability to contest an accusation.

The doping arms race will continue because of the incentives. It’s a classic Prisoner’s Dilemma. Consider two competing athletes: Alice and Bob. Both Alice and Bob have to individually decide if they are going to take drugs or not.

Imagine Alice evaluating her two options:

“If Bob doesn’t take any drugs,” she thinks, “then it will be in my best interest to take them. They will give me a performance edge against Bob. I have a better chance of winning.

“Similarly, if Bob takes drugs, it’s also in my interest to agree to take them. At least that way Bob won’t have an advantage over me.

“So even though I have no control over what Bob chooses to do, taking drugs gives me the better outcome, regardless of what action he takes.”

Unfortunately, Bob goes through exactly the same analysis. As a result, they both take performance-enhancing drugs and neither has the advantage over the other. If they could just trust each other, they could refrain from taking the drugs and maintain the same non-advantage status—without any legal or physical danger. But competing athletes can’t trust each other, and everyone feels he has to dope—and continues to search out newer and more undetectable drugs—in order to compete. And the arms race continues.
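To make the dominance argument concrete, here is a toy payoff matrix in Python. The numbers are invented; they only need to preserve the dilemma’s ordering, in which defecting against a clean opponent pays best and being the lone clean athlete pays worst.

```python
# Toy payoff matrix for the doping dilemma. The numbers are invented;
# they only need to preserve the Prisoner's Dilemma ordering.
PAYOFF = {  # (Alice's choice, Bob's choice) -> Alice's payoff
    ("clean", "clean"): 3,  # level playing field, no health or legal risk
    ("clean", "dope"):  0,  # Alice is outclassed
    ("dope",  "clean"): 5,  # Alice has the edge
    ("dope",  "dope"):  1,  # level playing field, but everyone bears the risk
}

def best_response(bob):
    """Alice's best choice, holding Bob's choice fixed."""
    return max(("clean", "dope"), key=lambda alice: PAYOFF[(alice, bob)])

for bob in ("clean", "dope"):
    print(f"If Bob plays {bob!r}, Alice's best response is {best_response(bob)!r}")

# Doping dominates: it is Alice's best response either way, even though
# (clean, clean) pays both players more than (dope, dope).
```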

Some sports are more vigilant about drug detection than others. European bicycle racing is particularly vigilant; so are the Olympics. American professional sports are far more lenient, often trying to give the appearance of vigilance while still allowing athletes to use performance-enhancing drugs. They know that their fans want to see beefy linebackers, powerful sluggers, and lightning-fast sprinters. So, with a wink and a nod, they only test for the easy stuff.

For example, look at baseball’s current debate over human growth hormone (HGH). The league has serious tests, and penalties, for steroid use, but everyone knows that players are now taking HGH because there is no urine test for it. There’s a blood test in development, but it’s still some time away from working. The way to stop HGH use is to take blood samples now and store them for future testing, but the players’ union has refused to allow it and the baseball commissioner isn’t pushing it.

In the end, doping is all about economics. Athletes will continue to dope because the Prisoner’s Dilemma forces them to do so. Sports authorities will either improve their detection capabilities or continue to pretend to do so—depending on their fans and their revenues. And as technology continues to improve, professional athletes will become more like deliberately designed racing cars.

This essay originally appeared on Wired.com.

Posted on August 10, 2006 at 5:18 AM • 66 Comments

Technological Arbitrage

This is interesting. Seems that a group of Sri Lankan credit card thieves collected the data off a bunch of UK chip-protected credit cards.

All new credit cards in the UK come embedded with RFID chips that contain different pieces of user information. In order to access the account and withdraw cash, the ATM has to verify both the magnetic strip and the RFID tag. Without this double verification the ATM will confiscate the card, and possibly even notify the police.

They’re not RFID chips, they’re normal smart card chips that require physical contact—but that’s not the point.

They couldn’t clone the chips, so they took the information off the magnetic stripe and made non-chip cards. These cards wouldn’t work in the UK, of course, so the criminals flew down to India where the ATMs only verify the magnetic stripe.

Backwards compatibility is often incompatible with security. This is a good example, and demonstrates how criminals can make use of “technological arbitrage” to leverage compatibility.
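The hole is easy to see in schematic form. Everything in this sketch is hypothetical (no real ATM network works quite this way), but it captures the structure: the weakest verification path that will still accept a card anywhere sets the card’s real security.

```python
# Hypothetical sketch of chip-and-stripe acceptance logic. The function
# names and regional policy are invented for illustration.

def verify_chip(card):
    # Stand-in check: a magstripe-only clone has no chip to present.
    return card.get("chip_present", False)

def verify_stripe(card):
    # Stand-in check: stripe data is trivially copied, so a clone passes.
    return card.get("stripe_data") is not None

def authorize(card, region):
    if region == "UK":
        # Chip-capable region: both verifications must pass.
        return verify_chip(card) and verify_stripe(card)
    # Legacy region: stripe-only fallback for backwards compatibility.
    return verify_stripe(card)

clone = {"chip_present": False, "stripe_data": "copied-track-data"}
print(authorize(clone, "UK"))     # False: the chip check catches the clone
print(authorize(clone, "India"))  # True: the fallback path accepts it
```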

EDITED TO ADD (8/9): Facts corrected above.

Posted on August 9, 2006 at 6:32 AM • 29 Comments

AOL Releases Massive Amount of Search Data

From TechCrunch:

AOL has released very private data about its users without their permission. While the AOL username has been changed to a random ID number, the ability to analyze all searches by a single user will often lead people to easily determine who the user is, and what they are up to. The data includes personal names, addresses, social security numbers and everything else someone might type into a search box.

The most serious problem is the fact that many people often search on their own name, or those of their friends and family, to see what information is available about them on the net. Combine these ego searches with porn queries and you have a serious embarrassment. Combine them with “buy ecstasy” and you have evidence of a crime. Combine it with an address, social security number, etc., and you have an identity theft waiting to happen. The possibilities are endless.

This is search data for roughly 658,000 anonymized users over a three month period from March to May—about 1/3 of 1 per cent of their total data for that period.

Now AOL says it was all a mistake. They pulled the data, but it’s still out there—and probably will be forever. And there’s some pretty scary stuff in it.
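The mechanics of re-identification are simple enough to sketch. The log rows below are invented stand-ins in the released format; the trick is just a group-by, after which a single self-identifying query names the whole profile.

```python
# Invented rows in the released format: (anonymized_id, query). The
# re-identification trick is just a group-by: a pseudonymous ID links
# every query one person made into a single profile.
from collections import defaultdict

log = [
    (4417749, "landscapers in lilburn ga"),
    (4417749, "homes sold in shadow lake subdivision gwinnett county georgia"),
    (4417749, "dog health questions"),   # invented filler query
    (1234567, "buy ecstasy"),            # invented second user
]

profiles = defaultdict(list)
for user_id, query in log:
    profiles[user_id].append(query)

# One self-identifying query (a name, an address, a neighborhood) then
# attaches a person to everything else in that ID's profile.
for user_id, queries in profiles.items():
    print(user_id, "->", queries)
```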

You can read more on Slashdot and elsewhere.

Anyone who wants to play NSA can start datamining for terrorists. Let us know if you find anything.

EDITED TO ADD (8/9): The New York Times:

And search by search, click by click, the identity of AOL user No. 4417749 became easier to discern. There are queries for “landscapers in Lilburn, Ga,” several people with the last name Arnold and “homes sold in shadow lake subdivision gwinnett county georgia.”

It did not take much investigating to follow that data trail to Thelma Arnold, a 62-year-old widow who lives in Lilburn, Ga., frequently researches her friends’ medical ailments and loves her three dogs. “Those are my searches,” she said, after a reporter read part of the list to her.

Posted on August 8, 2006 at 11:02 AM • 41 Comments

Malware Distribution Project

In case you needed a comprehensive database of malware.

Malware Distribution Project (MD:Pro) offers developers of security systems and anti-malware products a vast collection of downloadable malware from a secure and reliable source, exclusively for the purposes of analysis, testing, research and development.

Bringing together for the first time a large back-catalogue of malware, computer underground related information and IT security resources under one project, this major new system also contains a large selection of undetected malware, along with an open, collaborative platform, where malware samples can be shared among its members. The database is constantly updated with new files, and maintained to keep it running at an optimum.

There are currently 271,712 files in the system.

This isn’t free. You can subscribe at 1,250 euros for a month, or 13,500 euros a year. (There are cheaper packages with less comprehensive access.)

They claim to have a stringent vetting process, ensuring that only legitimate researchers have access to this database:

It should be noted that we are not a malware/VX distribution site, nor do we condone the public spreading and/or distribution of such information, hence we will be vetting our registrants stringently. We do appreciate that this puts a severe restriction on private (individual) malware researchers and enthusiasts with limited or no budget, but we do feel that providing free malware for public research is out of the scope of this project.

EDITED TO ADD (8/8): The hacker group Cult of the Dead Cow also has a malware repository, free and with looser access restrictions.

Posted on August 8, 2006 at 7:56 AM • 12 Comments

Printer Security

At BlackHat last week, Brendan O’Connor warned about the dangers of insecure printers:

“Stop treating them as printers. Treat them as servers, as workstations,” O’Connor said in his presentation on Thursday. Printers should be part of a company’s patch program and be carefully managed, not forgotten by IT and handled by the most junior person on staff, he said.

I remember the L0pht doing work on printer vulnerabilities, and ways to attack networks via the printers, years ago. But the point is still valid and bears repeating: printers are computers, and have vulnerabilities like any other computers.

Once a printer was under his control, O’Connor said he would be able to use it to map an organization’s internal network—a situation that could help stage further attacks. The breach gave him access to any of the information printed, copied or faxed from the device. He could also change the internal job counter—which can reduce, or increase, a company’s bill if the device is leased, he said.

The printer break-in also enables a number of practical jokes, such as sending print and scan jobs to arbitrary workers’ desktops, O’Connor said. Also, devices could be programmed to include, for example, an image of a paper clip on every print, fax or copy, ultimately driving office staffers to take the machine apart looking for the paper clip.

Getting copies of all printed documents is definitely a security vulnerability, but I think the biggest threat is that the printers are inside the network, and are a more-trusted launching pad for onward attacks.
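The point that printers are servers is easy to verify on a network you administer: most network printers speak PJL on TCP port 9100 and will answer identification queries with no authentication at all. A minimal probe (the address is a placeholder):

```python
# Minimal PJL probe: ask a networked printer to identify itself over
# the raw-print port (TCP 9100). Note that no authentication is
# involved. Only point this at a printer you administer.
import socket

PRINTER = "192.0.2.10"   # placeholder address; substitute your own printer
UEL = b"\x1b%-12345X"    # Universal Exit Language: enters/leaves PJL mode

with socket.create_connection((PRINTER, 9100), timeout=5) as conn:
    conn.sendall(UEL + b"@PJL INFO ID\r\n" + UEL)
    print(conn.recv(4096).decode(errors="replace"))  # typically the model string
```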

One of the weaknesses in the Xerox system is an unsecured boot loader, the technology that loads the basic software on the device, O’Connor said. Other flaws lie in the device’s Web interface and in the availability of services such as the Simple Network Management Protocol and Telnet, he said.

O’Connor informed Xerox of the problems in January. The company did issue a fix for its WorkCentre 200 series, it said in a statement. “Thanks to Brendan’s efforts, we were able to post a patch for our customers in mid-January which fixes the issues,” a Xerox representative said in an e-mailed statement.

One of the reasons this is a particularly nasty problem is that people don’t update their printer software. Want to bet that approximately 0% of those printers’ users installed the patch? And what about printers whose code can’t be patched?

EDITED TO ADD (8/7): O’Connor’s name corrected.

Posted on August 7, 2006 at 10:59 AM • 34 Comments

Friday Squid Blogging: Giant Robotic Squid

It’s being built in a Japanese town:

…on July 18, a group of Hakodate residents made an official announcement regarding plans to create a giant robotic squid for the city.

[…]

…cutting-edge technology…will allow the robot to be controlled remotely via the Internet.

[…]

…the entire body of squid robot will be covered in lights that blink as the robot moves. In addition, the robot will be equipped with a set of wireless receivers and will have its own homepage featuring a set of controls that allow remote users to move the robot’s tentacles and eyes.

The developers plan for the robot to stand 5 meters (16 feet) in height. After an initial 1.5-meter prototype is completed this November, work will begin on the larger final version, which the group aims to unveil in a parade at the Hakodate Port Festival in the summer of 2007.

[…]

The total cost of the robot is expected to be somewhere in the 30 million yen range (US$250,000).

Posted on August 4, 2006 at 4:24 PM • 7 Comments

Bank Bans Cell Phones

Not because they’re annoying, but as a security measure:

Cell phones have been banned inside the five branches of the First National Bank in the Chicago area, to enhance security.

Even using a cell phone in the bank’s lobby may result in the person being asked to leave the premises.

“We ban cell phone use in the lobby because you don’t know what people are doing,” Ralph Oster, a senior vice president, told the Chicago Tribune. Cell phone cameras are also a worry.

Oster said there have been holdups in which bandits were on the phone with lookouts outside while committing bank robberies.

“You’re trying to stop that communication,” he says.

Banks in Mexico City banned cell phones in May, and Citizens Financial Bank of Munster, Ind., asks customers to turn off their cell phones.

West Suburban Bank, based in Lombard, Ill., barred customers wearing hats in January but has not moved to silence cell phones.

This is just plain dumb. It’s easy to get around the ban: a Bluetooth earpiece is inconspicuous enough. Or a couple of earbuds that look like an iPod. Or an SMS device. And the ban only has to be evaded at the beginning; once you start actually robbing the bank, it isn’t going to deter you from using your cell phone.

Posted on August 4, 2006 at 3:11 PM • 34 Comments

Open Voting Foundation Releases Huge Diebold Voting Machine Flaw

It’s on their website:

“Diebold has made the testing and certification process practically irrelevant,” according to Dechert. “If you have access to these machines and you want to rig an election, anything is possible with the Diebold TS—and it could be done without leaving a trace. All you need is a screwdriver.” This model does not produce a voter verified paper trail so there is no way to check if the voter’s choices are accurately reflected in the tabulation.

Open Voting Foundation is releasing 22 high-resolution close up pictures of the system. This picture, in particular, shows a “BOOT AREA CONFIGURATION” chart painted on the system board.

The most serious issue is the ability to choose between “EPROM” and “FLASH” boot configurations. Both of these memory sources are present. All of the switches in question (JP2, JP3, JP8, SW2 and SW4) are physically present on the board. It is clear that this system can ship with live boot profiles in two locations, and switching back and forth could change literally everything regarding how the machine works and counts votes. This could be done before or after the so-called “Logic And Accuracy Tests”.
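Schematically, the danger is a one-line dispatch that the certification process never exercises. The sketch below is not Diebold’s firmware, just the shape of the alleged flaw:

```python
# Not Diebold's firmware -- just the shape of the alleged flaw. If a
# physical jumper selects the boot source, certifying the FLASH image
# is moot whenever EPROM is selected instead.
def boot(jumper, flash_image, eprom_image):
    # A screwdriver changes `jumper`; no software check ever notices.
    return flash_image if jumper == "FLASH" else eprom_image

print(boot("FLASH", "certified firmware", "anything at all"))  # certified firmware
print(boot("EPROM", "certified firmware", "anything at all"))  # anything at all
```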

If this is true, this is an enormously big deal.

Posted on August 4, 2006 at 11:27 AM • 61 Comments

Hackers Clone RFID Passports

It was demonstrated today at the BlackHat conference.

Grunwald says it took him only two weeks to figure out how to clone the passport chip. Most of that time he spent reading the standards for e-passports that are posted on a website for the International Civil Aviation Organization, a United Nations body that developed the standard. He tested the attack on a new European Union German passport, but the method would work on any country’s e-passport, since all of them will be adhering to the same ICAO standard.

In a demonstration for Wired News, Grunwald placed his passport on top of an official passport-inspection RFID reader used for border control. He obtained the reader by ordering it from the maker—Walluf, Germany-based ACG Identification Technologies—but says someone could easily make their own for about $200 just by adding an antenna to a standard RFID reader.

He then launched a program that border patrol stations use to read the passports—called Golden Reader Tool and made by secunet Security Networks—and within four seconds, the data from the passport chip appeared on screen in the Golden Reader template.

Grunwald then prepared a sample blank passport page embedded with an RFID tag by placing it on the reader—which can also act as a writer—and burning in the ICAO layout, so that the basic structure of the chip matched that of an official passport.

As the final step, he used a program that he and a partner designed two years ago, called RFDump, to program the new chip with the copied information.

The result was a blank document that looks, to electronic passport readers, like the original passport.

I’ve long been opposed (that last link is an op-ed from the International Herald Tribune) to RFID chips in passports, although last year I—mistakenly—withdrew my objections based on the security measures the State Department was taking.

That was silly. I’m not opposed to chips on ID cards; I am opposed to RFID chips. My fear is surreptitious access: someone could read the chip and learn your identity without your knowledge or consent.

Sure, the State Department is implementing security measures to prevent that. But as we all know, these measures won’t be perfect. And a passport has a ten-year lifetime. It’s sheer folly to believe the passport security won’t be hacked in that time. This hack took only two weeks!
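One of the planned measures is ICAO’s Basic Access Control, which derives the chip’s read keys from data printed inside the passport. A sketch of the key seeding (the field values are invented samples; the derivation follows ICAO Doc 9303) shows why the keys have far less effective entropy than their length suggests: document numbers are often sequentially assigned, and the two dates are guessable.

```python
# Sketch of ICAO 9303 Basic Access Control key seeding; the sample
# field values are invented. Note the inputs: document number, date of
# birth, and expiry date -- all printed in the passport, and largely
# guessable or sequentially assigned.
import hashlib

doc_number = "L898902C<"  # 9 characters, invented sample
birth_date = "690806"     # YYMMDD
expiry     = "940623"     # YYMMDD

def check_digit(field):
    """ICAO 9303 check digit: weights 7, 3, 1 cycled over the field."""
    value = {c: i for i, c in enumerate("0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ")}
    value["<"] = 0
    return str(sum(value[c] * (7, 3, 1)[i % 3] for i, c in enumerate(field)) % 10)

mrz_info = (doc_number + check_digit(doc_number)
            + birth_date + check_digit(birth_date)
            + expiry + check_digit(expiry))
k_seed = hashlib.sha1(mrz_info.encode()).digest()[:16]
print(k_seed.hex())  # every session key descends from this 16-byte seed
```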

The best way to solve a security problem is not to have it at all. If there’s an RFID chip on your passport, or any of your identity cards, you have to worry about securing it. If there’s no RFID chip, then the security problem is solved.

Until I hear a compelling case for why there must be an RFID chip on a passport, and why a normal contact smart-card chip won’t do, I am opposed to the idea.

Crossposted to the ACLU blog.

Posted on August 3, 2006 at 3:45 PM • 65 Comments

A Month of Browser Bugs

To kick off his new Browser Fun blog, H.D. Moore began with “A Month of Browser Bugs”:

This blog will serve as a dumping ground for browser-based security research and vulnerability disclosure. To kick off this blog, we are announcing the Month of Browser Bugs (MoBB), where we will publish a new browser hack, every day, for the entire month of July. The hacks we publish are carefully chosen to demonstrate a concept without disclosing a direct path to remote code execution. Enjoy!

Thirty-one days, and thirty-one hacks later, the blog lists exploits against all the major browsers:

  • Internet Explorer: 25
  • Mozilla: 2
  • Safari: 2
  • Opera: 1
  • Konqueror: 1

My guess is that he could have gone on for another month without any problem, and possibly could produce a new browser bug a day indefinitely.

The moral here isn’t that IE is less secure than the other browsers, although I certainly believe that. The moral is that coding standards are so bad that security flaws are this common.
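Part of why a bug a day was sustainable is that finding browser crashes can be mechanized. Here is a toy generator in the spirit of the DOM fuzzers of the era; the tags, attributes, and limits are arbitrary illustrations:

```python
# Toy page generator in the spirit of the era's browser fuzzers: emit
# randomly nested, randomly attributed markup and load each page in the
# browser under test, watching for crashes. All choices are arbitrary.
import random

TAGS = ["div", "table", "marquee", "object", "iframe", "select"]

def fuzz_page(depth=0):
    if depth > 6 or random.random() < 0.2:
        return "A" * random.randint(0, 10_000)  # oversized text nodes
    tag = random.choice(TAGS)
    attr = f' style="width:{random.randint(-2**31, 2**31)}px"'
    body = "".join(fuzz_page(depth + 1) for _ in range(random.randint(1, 4)))
    return f"<{tag}{attr}>{body}</{tag}>"

with open("case.html", "w") as f:
    f.write(fuzz_page())  # then load case.html in the target browser
```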

Eric Rescorla argues that it’s a waste of time to find and fix new security holes, because so many of them still remain and the software’s security isn’t improved. I think he has a point. (Note: this is not to say that it’s a waste of time to fix the security holes found and publicly exploited by the bad guys. The question Eric tries to answer is whether or not it is worth it for the security community to find new security holes.)

Another commentary is here.

Posted on August 3, 2006 at 1:53 PM • 24 Comments

Anti-Missile Defenses for Passenger Aircraft

It’s not happening anytime soon:

Congress agreed to pay for the development of the systems to protect the planes from such weapons, but balked at proposals to spend the billions needed to protect all 6,800 commercial U.S. airliners.

Probably for the best, actually. One, there are far more effective ways to spend that money on counterterrorism. And two, they’re only effective against a particular type of missile technology:

Both BAE and Northrop systems use lasers to jam the guidance systems of incoming missiles, which lock onto the heat of an aircraft’s engine.

Posted on August 3, 2006 at 7:30 AM • 30 Comments

Britain Adopts Threat Levels

Taking a cue from a useless American idea, the UK has announced a system of threat levels:

“Threat levels are designed to give a broad indication of the likelihood of a terrorist attack,” the intelligence.gov.uk website said in a posting. “They are based on the assessment of a range of factors including current intelligence, recent events and what is known about terrorist intentions and capabilities. This information may well be incomplete and decisions about the appropriate security response are made with this in mind.”

Unlike the previous secret grading system offering seven levels of threat, the new system has been simplified to five, starting with “low,” meaning an attack is unlikely, to “critical,” meaning an attack is expected imminently. Unlike American threat assessments, the British system is not color-coded.

The current level is “severe”:

“Severe” is the second-highest threat level, but the Web site did not say what kind of attack was likely. The assessment is roughly the same as it has been for a year.

I wrote about the stupidity of this sort of system back in 2004:

In theory, the warnings are supposed to cultivate an atmosphere of preparedness. If Americans are vigilant against the terrorist threat, then maybe the terrorists will be caught and their plots foiled. And repeated warnings brace Americans for the aftermath of another attack.

The problem is that the warnings don’t do any of this. Because they are so vague and so frequent, and because they don’t recommend any useful actions that people can take, terror threat warnings don’t prevent terrorist attacks. They might force a terrorist to delay his plan temporarily, or change his target. But in general, professional security experts like me are not particularly impressed by systems that merely force the bad guys to make minor modifications in their tactics.

And the alerts don’t result in a more vigilant America. It’s one thing to issue a hurricane warning, and advise people to board up their windows and remain in the basement. Hurricanes are short-term events, and it’s obvious when the danger is imminent and when it’s over. People can do useful things in response to a hurricane warning; then there is a discrete period when their lives are markedly different, and they feel there was utility in the higher alert mode, even if nothing came of it.

It’s quite another thing to tell people to be on alert, but not to alter their plans, as Americans were instructed last Christmas. A terrorist alert that instills a vague feeling of dread or panic, without giving people anything to do in response, is ineffective. Indeed, it inspires terror itself. Compare people’s reactions to hurricane threats with their reactions to earthquake threats. According to scientists, California is expecting a huge earthquake sometime in the next two hundred years. Even though the magnitude of the disaster will be enormous, people just can’t stay alert for two centuries. The news seems to have generated the same levels of short-term fear and long-term apathy in Californians that the terrorist warnings do. It’s human nature; people simply can’t be vigilant indefinitely.

[…]

This all implies that if the government is going to issue a threat warning at all, it should provide as many details as possible. But this is a catch-22: there’s an absolute limit to how much information the government can reveal. The classified nature of the intelligence that goes into these threat alerts precludes giving the public all the information it would need to be meaningfully prepared.

[…]

A terror alert that instills a vague feeling of dread or panic echoes the very tactics of the terrorists. There are essentially two ways to terrorize people. The first is to do something spectacularly horrible, like flying airplanes into skyscrapers and killing thousands of people. The second is to keep people living in fear with the threat of doing something horrible. Decades ago, that was one of the IRA’s major aims. Inadvertently, the DHS is achieving the same thing.

There’s another downside to incessant threat warnings, one that happens when everyone realizes that they have been abused for political purposes. Call it the “Boy Who Cried Wolf” problem. After too many false alarms, the public will become inured to them. Already this has happened. Many Americans ignore terrorist threat warnings; many even ridicule them. The Bush administration lost considerable respect when it was revealed that August’s New York/Washington warning was based on three-year-old information. And the more recent warning that terrorists might target cheap prescription drugs from Canada was assumed universally to be politics-as-usual.

Repeated warnings do more harm than good, by needlessly creating fear and confusion among those who still trust the government, and anesthetizing everyone else to any future alerts that might be important. And every false alarm makes the next terror alert less effective.

The Bush administration used this system largely as a political tool. Perhaps Tony Blair has the same idea.

Crossposted to the ACLU blog.

Posted on August 2, 2006 at 4:01 PM • 41 Comments

Brute Forcing Combination Locks

This computerized servomotor opens combination locks by brute-forcing all the combinations. This isn’t particularly surprising, but it is nice to see someone actually build one.

What’s more interesting is the link describing how to open a common Master brand lock in about 10 minutes. The design makes those 40³ (64,000) possible combinations collapse to just 121. It’s a physical metaphor for bad cryptography.
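Back-of-the-envelope numbers show why the collapse matters; the five-seconds-per-try rate is an assumed hand-dialing speed, and a servomotor can presumably go faster:

```python
# Back-of-the-envelope brute-force timing for a 40-position, 3-number
# combination lock. Five seconds per try is an assumed hand-dialing rate.
full_space    = 40 ** 3   # 64,000 nominal combinations
reduced_space = 121       # after exploiting the mechanical design

for label, n in (("full", full_space), ("reduced", reduced_space)):
    seconds = n * 5
    print(f"{label:8} {n:6} combos -> {seconds / 3600:5.1f} h ({seconds / 60:.0f} min)")
# full      64000 combos ->  88.9 h (5333 min)
# reduced     121 combos ->   0.2 h (10 min)
```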

Posted on August 2, 2006 at 1:54 PM • 29 Comments

Why the Top-Selling Antivirus Programs Aren't the Best

The top three antivirus programs—from Symantec, McAfee, and Trend Micro—are less likely to detect new viruses and worms than less popular programs, because virus writers specifically test their work against those programs:

On Wednesday, the general manager of Australia’s Computer Emergency Response Team (AusCERT), Graham Ingram, described how the threat landscape has changed—along with the skill of malware authors.

“We are getting code of a quality that is probably worthy of software engineers. Not application developers but software engineers,” said Ingram.

However, the actual reason why the top selling antivirus applications don’t work is because malware authors are specifically testing their Trojans and viruses to make sure they can bypass these applications before releasing them in the wild.

It’s interesting to watch the landscape change, as malware becomes less the province of hackers and more the province of criminals. This is one move in a continuous arms race between attacker and defender.

Posted on August 2, 2006 at 6:41 AM • 43 Comments

Updating the Traditional Security Model

On the Firewall Wizards mailing list last year, Dave Piscitello made a fascinating observation. Commenting on the traditional four-step security model:

Authentication (who are you)
Authorization (what are you allowed to do)
Availability (is the data accessible)
Authenticity (is the data intact)

Piscitello said:

This model is no longer sufficient because it does not include asserting the trustworthiness of the endpoint device from which a (remote) user will authenticate and subsequently access data. Network admission and endpoint control are needed to determine that the device is free of malware (esp. key loggers) before you even accept a keystroke from a user. So let’s prepend “admissibility” to your list, and come up with a 5-legged stool, or call it the Pentagon of Trust.

He’s 100% right.
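As a sketch, the five-legged stool is just one more gate at the front of the access pipeline. Every check below is an invented stand-in; what matters is the ordering, with the endpoint vetted before the user is even allowed to authenticate:

```python
# Hypothetical sketch of the five-step model as a pipeline. Every check
# is an invented stand-in; the point is the order -- the endpoint is
# vetted before the user is even allowed to authenticate.
CHECKS = [
    ("admissibility",  lambda s: s["endpoint_clean"]),    # device free of malware?
    ("authentication", lambda s: s["user_verified"]),     # who are you?
    ("authorization",  lambda s: s["action_permitted"]),  # what may you do?
    ("availability",   lambda s: s["data_reachable"]),    # is the data accessible?
    ("authenticity",   lambda s: s["data_intact"]),       # is the data intact?
]

def grant_access(session):
    for name, check in CHECKS:
        if not check(session):
            return f"denied at {name}"
    return "granted"

keylogged_laptop = {"endpoint_clean": False, "user_verified": True,
                    "action_permitted": True, "data_reachable": True,
                    "data_intact": True}
print(grant_access(keylogged_laptop))  # denied at admissibility
```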

Posted on August 1, 2006 at 2:03 PM • 54 Comments

Security and Monoculture

Interesting research.

EDITED TO ADD (8/1): The paper is only viewable by subscribers. Here are some excerpts:

Fortunately, buffer-overflow attacks have a weakness: the intruder must know precisely what part of the computer’s memory to target. In 1996, Forrest realised that these attacks could be foiled by scrambling the way a program uses a computer’s memory. When you launch a program, the operating system normally allocates the same locations in a computer’s random access memory (RAM) each time. Forrest wondered whether she could rewrite the operating system to force the program to use different memory locations that are picked randomly every time, thus flummoxing buffer-overflow attacks.

To test her concept, Forrest experimented with a version of the open-source operating system Linux. She altered the system to force programs to assign data to memory locations at random. Then she subjected the computer to several well-known attacks that used the buffer-overflow technique. None could get through. Instead, they targeted the wrong area of memory. Although part of the software would often crash, Linux would quickly restart it, and get rid of the virus in the process. In rare situations it would crash the entire operating system, a short-lived annoyance, certainly, but not bad considering the intruder had failed to take control of the machine.

Linux computer-security experts quickly picked up on Forrest’s idea. In 2003 Red Hat, the maker of a popular version of Linux, began including memory-space randomisation in its products. “We had several vulnerabilities which we could downgrade in severity,” says Marc J. Cox, a Red Hat security expert.

[…]

Memory scrambling isn’t the only way to add diversity to operating systems. Even more sophisticated techniques are in the works. Forrest has tried altering “instruction sets”, commands that programs use to communicate with a computer’s hardware, such as its processor chip or memory.

Her trick was to replace the “translator” program that interprets these instruction sets with a specially modified one. Every time the computer boots up, Forrest’s software loads into memory and encrypts the instruction sets in the hardware using a randomised encoding key. When a program wants to send a command to the computer, Forrest’s translator decrypts the command on the fly so the computer can understand it.

This produces an elegant form of protection. If an attacker manages to insert malicious code into a running program, that code will also be decrypted by the translator when it is passed to the hardware. However, since the attacker’s code is not encrypted in the first place, the decryption process turns it into digital gibberish so the computer hardware cannot understand it. Since it exists only in the computer’s memory and has not been written to the computer’s hard disc, it will vanish upon reboot.

Forrest has tested the process on several versions of Linux while launching buffer-overflow attacks. None were able to penetrate. As with memory randomisation, the failed attacks would, at worst, temporarily crash part of Linux – a small price to pay. Her translator program was a success. “It seemed like a crazy idea at first,” says Gabriel Barrantes, who worked with Forrest on the project. “But it turned out to be sound.”

[…]

In 2004, a group of researchers led by Hovav Shacham at Stanford University in California tried this trick against a copy of the popular web-server application Apache that was running on Linux, protected with memory randomisation. It took them 216 seconds per attack to break into it. They concluded that this protection is not sufficient to stop the most persistent viruses or a single, dedicated attacker.

Last year, a group of researchers at the University of Virginia, Charlottesville, performed a similar attack on a copy of Linux whose instruction set was protected by randomised encryption. They used a slightly more complex approach, making a series of guesses about different parts of the randomisation key. This time it took over 6 minutes to force a way in: the system was tougher, but hardly invulnerable.

[…]

Knight says that randomising the encryption on the instruction set is a more powerful technique because it can use larger and more complex forms of encryption. The only limitation is that as the encryption becomes more complicated, it takes the computer longer to decrypt each instruction, and this can slow the machine down. Barrantes found that instruction-set randomisation more than doubled the length of time an instruction took to execute. Make the encryption too robust, and computer users could find themselves drumming their fingers as they wait for a web page to load.

So he thinks the best approach is to combine different types of randomisation. Where one fails, another picks up. Last year, he took a variant of Linux and randomised both its memory-space allocation and its instruction sets. In December, he put 100 copies of the software online and hired a computer-security firm to try and penetrate them. The attacks failed. In May, he repeated the experiment but this time he provided the attackers with extra information about the randomised software. Their assault still failed.

The idea was to simulate what would happen if an adversary had a phenomenal amount of money, and secret information from an inside collaborator, says Knight. The results pleased him and, he hopes, will also please DARPA when he presents them to the agency. “We aren’t claiming we can do everything, but for broad classes of attack, these techniques appear to work very well. We have no reason to believe that there would be any change if we were to try to apply this to the real world.”
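Forrest’s instruction-set trick is easy to simulate. In the toy below, XOR with a random per-boot key stands in for the real randomised encoding: code that was encoded at load time decodes cleanly, while injected code, never having been encoded, decodes to gibberish.

```python
# Toy simulation of instruction-set randomisation. XOR with a random
# per-boot key stands in for the real encoding; this is the shape of
# the mechanism, not the mechanism itself.
import os

KEY = os.urandom(16)  # fresh "instruction encoding" key at each boot

def xor(data, key=KEY):
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

legit_program = b"MOV A,1; ADD A,2"  # encoded once, at load time
loaded = xor(legit_program)

injected = b"JMP shellcode"          # attacker-supplied raw, unencoded bytes

# The "hardware" decodes everything it executes:
print(xor(loaded))    # b'MOV A,1; ADD A,2' -- the real program survives
print(xor(injected))  # gibberish: injected code is garbled on decode
```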

EDITED TO ADD (8/2): The article is online here.

Posted on August 1, 2006 at 6:26 AM • 47 Comments
