Blog: November 2009 Archives

The Psychology of Being Scammed

This is a very interesting paper: “Understanding scam victims: seven principles for systems security,” by Frank Stajano and Paul Wilson. Paul Wilson produces and stars in the British television show The Real Hustle, which does hidden camera demonstrations of con games. (There’s no DVD of the show available, but there are bits of it on YouTube.) Frank Stajano is at the Computer Laboratory of the University of Cambridge.

The paper describes a dozen different con scenarios—entertaining in itself—and then lists and explains six general psychological principles that con artists use:

1. The distraction principle. While you are distracted by what retains your interest, hustlers can do anything to you and you won’t notice.

2. The social compliance principle. Society trains people not to question authority. Hustlers exploit this “suspension of suspiciousness” to make you do what they want.

3. The herd principle. Even suspicious marks will let their guard down when everyone next to them appears to share the same risks. Safety in numbers? Not if they’re all conspiring against you.

4. The dishonesty principle. Anything illegal you do will be used against you by the fraudster, making it harder for you to seek help once you realize you’ve been had.

5. The deception principle. Things and people are not what they seem. Hustlers know how to manipulate you to make you believe that they are.

6. The need and greed principle. Your needs and desires make you vulnerable. Once hustlers know what you really want, they can easily manipulate you.

It all makes for very good reading.

Two previous posts on the psychology of conning and being conned.

EDITED TO ADD (12/12): Some of the episodes of The Real Hustle are available on the BBC site, but only to people with UK IP addresses—or people with a VPN tunnel to the UK.

Posted on November 30, 2009 at 6:17 AM

Fear and Public Perception

This 1996 interview with psychiatrist Robert DuPont was part of a Frontline program called “Nuclear Reaction.”

He’s talking about the role fear plays in the perception of nuclear power. Much of it is the sort of thing I say, but this bit on familiarity and how it reduces fear is particularly interesting:

You see, we sited these plants away from metropolitan areas to “protect the public” from the dangers of nuclear power. What we did when we did that was move the plants away from the people, so they became unfamiliar. The major health effect, adverse health effect of nuclear power is not radiation. It’s fear. And by siting them away from the people, we insured that that would be maximized. If we’re serious about health in relationship to nuclear power, we would put them in downtown, big cities, so people would see them all the time. That is really important, in terms of reducing the fear. Familiarity is the way fear is reduced. No question. It’s not done intellectually. It’s not done by reading a book. It’s done by being there and seeing it and talking to the people who work there.

So, among other reasons, terrorism is scary because it’s so rare. When it’s more common—England during the Troubles, Israel today—people have a more rational reaction to it.

My recent essay on fear and overreaction.

Posted on November 27, 2009 at 8:25 AM

Leaked 9/11 Text Messages

Wikileaks has published pager intercepts from New York on 9/11:

WikiLeaks released half a million US national text pager intercepts. The intercepts cover a 24 hour period surrounding the September 11, 2001 attacks in New York and Washington.

[…]

Text pagers are usually carried by persons operating in an official capacity. Messages in the archive range from Pentagon, FBI, FEMA and New York Police Department exchanges, to computers reporting faults at investment banks inside the World Trade Center.

Near as I can tell, these messages are from the commercial pager networks of Arch Wireless, Metrocall, Skytel, and Weblink Wireless, and include all customers of those services: government, corporate, and personal.

There are lots of nuggets in the data about the government response to 9/11:

One string of messages hints at how federal agencies scrambled to evacuate to Mount Weather, the government’s sort-of secret bunker buried under the Virginia mountains west of Washington, D.C. One message says, “Jim: DEPLOY TO MT. WEATHER NOW!,” and another says “CALL OFICE (sic) AS SOON AS POSSIBLE. 4145 URGENT.” That’s the phone number for the Federal Emergency Management Agency’s National Continuity Programs Directorate—which is charged with “the preservation of our constitutional form of government at all times,” even during a nuclear war. (A 2006 article in the U.K. Guardian newspaper mentioned “a traffic jam of limos carrying Washington and government license plates” heading to Mount Weather that day.)

FEMA’s response seemed less than organized. One message at 12:37 p.m., four hours after the attacks, says: “We have no mission statements yet.” Bill Prusch, FEMA’s project officer for the National Emergency Management Information System at the time, apparently announced at 2 p.m. that the Continuity of Operations plan was activated and that certain employees should report to Mt. Weather; a few minutes later he sent out another note saying the activation was cancelled.

Historians will certainly spend a lot of time poring over the messages, but I’m more interested in where they came from in the first place:

It’s not clear how they were obtained in the first place. One possibility is that they were illegally compiled from the records of archived messages maintained by pager companies, and then eventually forwarded to WikiLeaks.

The second possibility is more likely: Over-the-air interception. Each digital pager is assigned a unique Channel Access Protocol code, or capcode, that tells it to pay attention to what immediately follows. In what amounts to a gentlemen’s agreement, no encryption is used, and properly-designed pagers politely ignore what’s not addressed to them.

But an electronic snoop lacking that same sense of etiquette might hook up a sufficiently sophisticated scanner to a Windows computer with lots of disk space—and record, without much effort, gobs and gobs of over-the-air conversations.

Existing products do precisely this. Australia’s WiPath Communications offers Interceptor 3.0 (there’s even a free download). Maryland-based SWS Security Products sells something called a “Beeper Buster” that it says lets police “watch up to 2500 targets at the same time.” And if you’re frugal, there’s a video showing you how to take a $10 pager and modify it to capture everything on that network.
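
It’s worth spelling out just how little stands in the way. Here’s a minimal sketch, in Python, of the difference between a pager and an interceptor. The frame format and capcodes below are invented for illustration (real networks use POCSAG or FLEX encoding), but the security property, or lack of one, is the same:

    # Toy model of pager addressing. The frames and capcodes are made up;
    # there is no encryption or authentication to model, because the real
    # protocols have none.
    frames = [
        (1048576, "Jim: DEPLOY TO MT. WEATHER NOW!"),
        (2097152, "Server rack 7F: disk fault"),
        (1048576, "CALL OFICE AS SOON AS POSSIBLE. 4145 URGENT."),
    ]

    MY_CAPCODE = 1048576

    def polite_pager(frames):
        """A properly designed pager: display only frames addressed to it."""
        return [msg for capcode, msg in frames if capcode == MY_CAPCODE]

    def snoop(frames):
        """An interceptor: same radio, same decoder, just no filter."""
        return list(frames)

    print(polite_pager(frames))   # the two messages for this capcode
    print(len(snoop(frames)))     # everything on the channel: 3

The only difference between the two receivers is a single comparison; everything else is broadcast in the clear.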

It’s disturbing to realize that someone, possibly not even a government, was routinely intercepting most (all?) of the pager data in lower Manhattan as far back as 2001. Who was doing it? For what purpose? That, we don’t know.

Posted on November 26, 2009 at 7:11 AM

Virtual Mafia in Online Worlds

If you allow players in an online world to penalize each other, you open the door to extortion:

One of the features that supported user socialization in the game was the ability to declare that another user was a trusted friend. The feature involved a graphical display that showed the faces of users who had declared you trustworthy outlined in green, attached in a hub-and-spoke pattern to your face in the center.

[…]

That feature was fine as far as it went, but unlike other social networks, The Sims Online allowed users to declare other users untrustworthy too. The face of an untrustworthy user appeared circled in bright red among all the trustworthy faces in a user’s hub.

It didn’t take long for a group calling itself the Sims Mafia to figure out how to use this mechanic to shake down new users when they arrived in the game. The dialog would go something like this:

“Hi! I see from your hub that you’re new to the area. Give me all your Simoleans or my friends and I will make it impossible to rent a house.”

“What are you talking about?”

“I’m a member of the Sims Mafia, and we will all mark you as untrustworthy, turning your hub solid red (with no more room for green), and no one will play with you. You have five minutes to comply. If you think I’m kidding, look at your hub—three of us have already marked you red. Don’t worry, we’ll turn it green when you pay…”

If you think this is a fun game, think again—a typical response to this shakedown was for the user to decide that the game wasn’t worth $10 a month. Playing dollhouse doesn’t usually involve gangsters.

EDITED TO ADD (12/12): Sims Mafia existed in 2004.

Posted on November 25, 2009 at 6:36 AM

Users Rationally Rejecting Security Advice

This paper, by Cormac Herley at Microsoft Research, sounds like me:

Abstract: It is often suggested that users are hopelessly lazy and unmotivated on security questions. They choose weak passwords, ignore security warnings, and are oblivious to certificate errors. We argue that users’ rejection of the security advice they receive is entirely rational from an economic perspective. The advice offers to shield them from the direct costs of attacks, but burdens them with far greater indirect costs in the form of effort. Looking at various examples of security advice we find that the advice is complex and growing, but the benefit is largely speculative or moot. For example, much of the advice concerning passwords is outdated and does little to address actual threats, and fully 100% of certificate error warnings appear to be false positives. Further, if users spent even a minute a day reading URLs to avoid phishing, the cost (in terms of user time) would be two orders of magnitude greater than all phishing losses. Thus we find that most security advice simply offers a poor cost-benefit tradeoff to users and is rejected. Security advice is a daily burden, applied to the whole population, while an upper bound on the benefit is the harm suffered by the fraction that become victims annually. When that fraction is small, designing security advice that is beneficial is very hard. For example, it makes little sense to burden all users with a daily task to spare 0.01% of them a modest annual pain.

Sounds like me.
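
The paper’s cost-benefit arithmetic is easy to reproduce. A back-of-envelope sketch, using illustrative round numbers in the spirit of the paper (the user count, wage, and loss figures are my assumptions, not Herley’s exact values):

    # Aggregate cost of a one-minute daily security chore vs. the losses
    # it might prevent. All figures are illustrative assumptions.
    online_users = 180e6           # rough US online population, circa 2009
    minutes_per_day = 1            # time spent scrutinizing URLs
    hourly_rate = 2 * 7.25         # time valued at twice the 2009 US minimum wage

    effort_cost = online_users * (minutes_per_day / 60) * hourly_rate * 365
    phishing_losses = 60e6         # rough annual US phishing losses

    print(f"user effort:     ${effort_cost / 1e9:.1f} billion/year")
    print(f"phishing losses: ${phishing_losses / 1e6:.0f} million/year")
    print(f"ratio:           {effort_cost / phishing_losses:.0f}x")

Under those assumptions the effort costs a couple hundred times more than the harm it prevents, which is exactly the “two orders of magnitude” the abstract describes.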

EDITED TO ADD (12/12): Related article on usable security.

Posted on November 24, 2009 at 12:40 PM

Decertifying "Terrorist" Pilots

This article reads like something written by the company’s PR team.

When it comes to sleuthing these days, knowing your way within a database is as valued a skill as the classic, Sherlock Holmes-styled powers of detection.

Safe Banking Systems Software proved this very point in a demonstration of its algorithm acumen—one that resulted in a disclosure that convicted terrorists actually maintained working licenses with the U.S. Federal Aviation Administration.

The algorithm seems to be little more than matching up names and other basic info:

It used its algorithm-detection software to sift out uncommon names such as Abdelbaset Ali Elmegrahi, aka the Lockerbie bomber. It found that a number of licensed airmen all had the same P.O. box as their listed address—one that happened to be in Tripoli, Libya. These men all had working FAA certificates. And while the FAA database information investigated didn’t contain date-of-birth information, Safe Banking was able to use content on the FAA Website to determine these key details as well, to further gain a positive and clear identification of the men in question.
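
That kind of record linkage is simple enough to sketch. Here’s a toy version in Python that flags fuzzy name matches against a watchlist and clusters certificates sharing one mailing address; the records, threshold, and normalization are all hypothetical, and real systems use transliteration-aware scoring:

    from collections import defaultdict
    from difflib import SequenceMatcher

    # Hypothetical records in the style of an airman registry.
    airmen = [
        {"name": "ABDELBASET ALI ELMEGRAHI", "address": "PO BOX 4145, TRIPOLI"},
        {"name": "JOHN Q SMITH",             "address": "12 OAK ST, DENVER CO"},
        {"name": "A. A. EL MEGRAHI",         "address": "PO BOX 4145, TRIPOLI"},
    ]
    watchlist = ["ABDELBASET ALI MOHMED ELMEGRAHI"]

    def similar(a, b, threshold=0.75):
        """Crude fuzzy match on squashed names."""
        squash = lambda s: s.replace(".", "").replace(" ", "")
        return SequenceMatcher(None, squash(a), squash(b)).ratio() >= threshold

    # Flag airmen whose names resemble a watchlist entry...
    name_hits = [r["name"] for r in airmen
                 if any(similar(r["name"], w) for w in watchlist)]

    # ...and clusters of certificates that share one mailing address.
    by_address = defaultdict(list)
    for r in airmen:
        by_address[r["address"]].append(r["name"])
    address_hits = {a: n for a, n in by_address.items() if len(n) > 1}

    print(name_hits)      # catches the close spelling
    print(address_hits)   # catches the variant via the shared P.O. box

The sketch also makes the failure mode obvious: a threshold loose enough to catch transliteration variants will sweep in innocents with similar names, which is where the false positives come from.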

In any case, they found these three people with pilot’s licenses:

Elmegrahi, who had been posted on the FBI Most Wanted list for a decade and was convicted of blowing up Pan Am Flight 103, killing 259 people in 1988 over Lockerbie, Scotland. Elmegrahi was an FAA-certified aircraft dispatcher.

Re Tabib, a California resident who was convicted in 2007 for illegally exporting U.S. military aircraft parts—specifically export maintenance kits for F-14 fighter jets—to Iran. Tabib received three FAA licenses after his conviction, qualifying to be a flight instructor, ground instructor and transport pilot.

Myron Tereshchuk, who pleaded guilty to possession of a biological weapon after the FBI caught him with a brew of ricin, explosive powder and other essentials in Maryland in 2004. Tereshchuk was a licensed mechanic and student pilot.

And the article concludes with:

Suffice to say, after the FAA was made aware of these criminal histories, all three men have since been decertified.

Although I’m all for annoying international arms dealers, does anyone know the procedures for FAA decertification? Did the FAA have the legal right to do this, after being “made aware” of some information by a third party?

Of course, they don’t talk about all the false positives their system also found. How many innocents were also decertified? And they don’t mention the fact that, in the 9/11 attacks, FAA certification wasn’t really an issue. “Excuse me, young man. You can’t hijack and fly this aircraft. It says right here that the FAA decertified you.”

Posted on November 23, 2009 at 2:36 PM

Al Qaeda Secret Code Broken

I would sure like to know more about this:

Top code-breakers at the Government Communications Headquarters in the United Kingdom have succeeded in breaking the secret language that has allowed imprisoned leaders of al-Qaida to keep in touch with other extremists in U.K. jails as well as 10,000 “sleeper agents” across the islands….

[…]

For six months, the code-breakers worked around the clock deciphering the code the three terrorists created.

Between them, the code-breakers speak all the dialects that form the basis for the code. Several of them have high-value skills in computer technology. The team worked closely with the U.S. National Security Agency and its station at Menwith Hill in the north of England. The identity of the code-breakers is so secret that not even their gender can be revealed.

“Like all good codes, the one they broke depended on substituting words, numbers or symbols for plain text. A single symbol could represent an idea or an entire message,” said an intelligence source.

The code the terrorists devised consists of words chosen from no fewer than 20 dialects from Afghanistan, Iran, Pakistan, Yemen and Sudan.

Inserted with the words—either before or after them—is local slang. The completed message is then buried in Islamic religious tracts.

EDITED TO ADD: Here’s a link to the story that still works. I didn’t realize this came from WorldNetDaily, so take it with an appropriate amount of salt.

Posted on November 23, 2009 at 7:24 AM

Denial-of-Service Attack Against CALEA

Interesting:

The researchers say they’ve found a vulnerability in U.S. law enforcement wiretaps, if only theoretical, that would allow a surveillance target to thwart the authorities by launching what amounts to a denial-of-service (DoS) attack against the connection between the phone company switches and law enforcement.

[…]

The University of Pennsylvania researchers found the flaw after examining the telecommunication industry standard ANSI Standard J-STD-025, which addresses the transmission of wiretapped data from telecom switches to authorities, according to IDG News Service. Under the 1994 Communications Assistance for Law Enforcement Act, or Calea, telecoms are required to design their network architecture to make it easy for authorities to tap calls transmitted over digitally switched phone networks.

But the researchers, who describe their findings in a paper, found that the standard allows for very little bandwidth for the transmission of data about phone calls, which can be overwhelmed in a DoS attack. When a wiretap is enabled, the phone company’s switch establishes a 64-Kbps Call Data Channel to send data about the call to law enforcement. That paltry channel can be flooded if a target of the wiretap sends dozens of simultaneous SMS messages or makes numerous VOIP phone calls “without significant degradation of service to the targets’ actual traffic.”

As a result, the researchers say, law enforcement could lose records of whom a target called and when. The attack could also prevent the content of calls from being accurately monitored or recorded.
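
The arithmetic behind the attack is straightforward. A sketch with assumed record sizes (the per-message figures below are illustrative guesses; the paper works from the actual J-STD-025 message formats):

    # How much traffic saturates a 64-kbps Call Data Channel?
    # The per-record figures are illustrative assumptions, not the
    # standard's actual message sizes.
    cdc_bytes_per_sec = 64_000 / 8   # channel capacity: 8,000 bytes/second
    bytes_per_record = 500           # assumed size of one call-data message
    records_per_event = 4            # assumed records per SMS or VoIP call

    events_per_sec = cdc_bytes_per_sec / (bytes_per_record * records_per_event)
    print(f"{events_per_sec:.0f} signaling events/second fill the channel")

Under those assumptions a target only has to generate a handful of signaling events per second, easily done with scripted SMS messages or VoIP calls, before the channel starts dropping the records law enforcement actually cares about.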

The paper. Comments by Matt Blaze, one of the paper’s authors.

Posted on November 20, 2009 at 6:11 AM

A Taxonomy of Social Networking Data

At the Internet Governance Forum in Sharm El Sheikh this week, there was a conversation on social networking data. Someone made the point that there are several different types of data, and it would be useful to separate them. This is my taxonomy of social networking data.

  1. Service data. Service data is the data you need to give to a social networking site in order to use it. It might include your legal name, your age, and your credit card number.
  2. Disclosed data. This is what you post on your own pages: blog entries, photographs, messages, comments, and so on.
  3. Entrusted data. This is what you post on other people’s pages. It’s basically the same stuff as disclosed data, but the difference is that you don’t have control over the data—someone else does.
  4. Incidental data. Incidental data is data the other people post about you. Again, it’s basically the same stuff as disclosed data, but the difference is that 1) you don’t have control over it, and 2) you didn’t create it in the first place.
  5. Behavioral data. This is data that the site collects about your habits by recording what you do and who you do it with.

Different social networking sites give users different rights for each data type. Some are always private, some can be made private, and some are always public. Some can be edited or deleted—I know one site that allows entrusted data to be edited or deleted within a 24-hour period—and some cannot. Some can be viewed and some cannot.
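
One way to see the taxonomy concretely is as a per-type rights policy. The sketch below encodes the five data types and one invented policy; the specific rights shown are hypothetical, and the point is that every real site fills in this table differently:

    from dataclasses import dataclass
    from enum import Enum, auto

    class DataType(Enum):
        SERVICE = auto()      # what you give the site in order to use it
        DISCLOSED = auto()    # what you post on your own pages
        ENTRUSTED = auto()    # what you post on other people's pages
        INCIDENTAL = auto()   # what other people post about you
        BEHAVIORAL = auto()   # what the site records about your habits

    @dataclass(frozen=True)
    class Rights:
        viewable: bool    # can the data subject see it?
        editable: bool    # can the data subject change it?
        deletable: bool   # can the data subject remove it?

    # One invented policy; real sites each pick a different table.
    policy = {
        DataType.SERVICE:    Rights(viewable=True,  editable=True,  deletable=False),
        DataType.DISCLOSED:  Rights(viewable=True,  editable=True,  deletable=True),
        DataType.ENTRUSTED:  Rights(viewable=True,  editable=False, deletable=False),
        DataType.INCIDENTAL: Rights(viewable=True,  editable=False, deletable=False),
        DataType.BEHAVIORAL: Rights(viewable=False, editable=False, deletable=False),
    }

    print(policy[DataType.ENTRUSTED])   # someone else controls this data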

And people should have different rights with respect to each data type. It’s clear that people should be allowed to change and delete their disclosed data. It’s less clear what rights they have for their entrusted data. And far less clear for their incidental data. If you post pictures of a party with me in them, can I demand you remove those pictures—or at least blur out my face? And what about behavioral data? It’s often a critical part of a social networking site’s business model. We often don’t mind if they use it to target advertisements, but are probably less sanguine about them selling it to third parties.

As we continue our conversations about what sorts of fundamental rights people have with respect to their data, this taxonomy will be useful.

EDITED TO ADD (12/12): Another categorization centered on destination instead of trust level.

Posted on November 19, 2009 at 12:51 PM

Stabbing People with Stuff You Can Get Through Airport Security

“Use of a pig model to demonstrate vulnerability of major neck vessels to inflicted trauma from common household items,” from the American Journal of Forensic Medicine and Pathology.

Abstract. Commonly available items including a ball point pen, a plastic knife, a broken wine bottle, and a broken wine glass were used to inflict stab and incised wounds to the necks of 3 previously euthanized Large White pigs. With relative ease, these items could be inserted into the necks of the pigs next to the jugular veins and carotid arteries. Despite precautions against the carrying of metal objects such as knives and nail files on board domestic and international flights, objects are still available within aircraft cabins that could be used to inflict serious and potentially life-threatening injuries. If airport and aircraft security measures are to be consistently applied, then consideration should be given to removing items such as glass bottles and glass drinking vessels. However, given the results of a relatively uncomplicated modification of a plastic knife, it may not be possible to remove all dangerous objects from aircraft. Security systems may therefore need to focus on measures such as increased surveillance of passenger behavior, rather than on attempting to eliminate every object that may serve as a potential weapon.

Posted on November 19, 2009 at 7:10 AM

How Smart are Islamic Terrorists?

Organizational Learning and Islamic Militancy (May 2009) was written by Michael Kenney for the U.S. Department of Justice. It’s long: 146 pages. From the executive summary:

Organizational Learning and Islamic Militancy contains significant findings for counter-terrorism research and policy. Unlike existing studies, this report suggests that the relevant distinction in knowledge learned by terrorists is not between tacit and explicit knowledge, but metis and techne. Focusing on the latter sheds new insight into how terrorists acquire the experiential “know how” they need to perform their activities as opposed to abstract “know what” contained in technical bomb-making preparations. Drawing on interviews with bomb-making experts and government intelligence officials, the PI illustrates the critical difference between learning terrorism skills such as bomb-making and weapons firing by abstraction rather than by doing. Only the latter provides militants with the experiential, intuitive knowledge, in other words the metis, they need to actually build bombs, fire weapons, survey potential targets, and perform other terrorism-related activities. In making this case, the PI debunks current misconceptions regarding the Internet’s perceived role as a source of terrorism knowledge.

Another major research finding of this study is that while some Islamic militants learn, they do not learn particularly well. Much terrorism learning involves fairly routine adaptations in communications practices and targeting tactics, what organization theorists call single-loop learning or adaptation. Less common among militants are consequential changes in beliefs and values that underlie collective action or even changes in organizational goals and strategies. Even when it comes to single-loop learning, Islamic militants face significant impediments. Many terrorist conspiracies are compartmented, which makes learning difficult by impeding the free flow of information between different parts of the enterprise. Other, non-compartmented conspiracies are hindered from learning because the same people that survey targets and build bombs also carry out the attacks. Still other operations, including relatively successful ones like the Madrid bombings in 2004, are characterized by such sloppy tradecraft that investigators piece together the conspiracy quickly, preventing additional attacks and limiting militants’ ability to learn from experience.

Indeed, one of the most significant findings to emerge from this research regards the poor tradecraft and operational mistakes repeatedly committed by Islamic terrorists. Even the most “successful” operations in recent years—9/11, 3/11, and 7/7—contained basic errors in tradecraft and execution. The perpetrators that carried out these attacks were determined, adaptable (if only in a limited, tactical sense)—and surprisingly careless. The PI extracts insights from his informants that help account for terrorists’ poor tradecraft: metis in guerrilla warfare that does not translate well to urban terrorism, the difficulty of acquiring mission-critical experience when the attack or counter-terrorism response kills the perpetrators, a hostile counter-terrorism environment that makes it hard to plan and coordinate attacks or develop adequate training facilities, and perpetrators’ conviction that they don’t need to be too careful when carrying out attacks because their fate has been predetermined by Allah. The PI concludes this report by discussing some of the policy implications of these findings, suggesting that the real threat from Islamic militancy comes less from hyper-sophisticated “super terrorists” than from steadfast militants whose own dedication to the cause may undermine the cunning intelligence and fluid adaptability they need to survive.

Posted on November 18, 2009 at 1:45 PM

Quantum Ghost Imaging

This is cool:

Ghost imaging is a technique that allows a high-resolution camera to produce an image of an object that the camera itself cannot see. It uses two sensors: one that looks at a light source and another that looks at the object. These sensors point in different directions. For example, the camera can face the sun and the light meter can face an object.

That object might be a soldier, a tank or an airplane, Ron Meyers, a laboratory quantum physicist explained during an Oct. 28 interview on the Pentagon Channel podcast “Armed with Science: Research and Applications for the Modern Military.”

Once this is done, a computer program compares and combines the patterns received from the object and the light. This creates a “ghost image,” a black-and-white or color picture of the object being photographed. The earliest ghost images were silhouettes, but current ones depict the objects more realistically.

[…]

Using virtually any light source—from a fluorescent bulb, lasers, or even the sun—quantum ghost imaging gives a clearer picture of objects by eliminating conditions such as clouds, fog and smoke beyond the ability of conventional imaging.
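
The correlation at the heart of the technique is easy to simulate. The sketch below does computational ghost imaging, a classical cousin of the quantum effect described above: known random patterns stand in for the light source, a single-pixel “bucket” reading is recorded per pattern, and correlating the two recovers an image of an object the bucket detector never spatially resolves:

    import numpy as np

    # Computational ghost imaging: reconstruct an object from (pattern,
    # bucket) pairs. The bucket detector records only total intensity.
    rng = np.random.default_rng(0)
    n = 32
    obj = np.zeros((n, n))
    obj[8:24, 13:19] = 1.0    # a simple bar as the hidden object

    patterns = rng.random((5000, n, n))          # known illumination patterns
    buckets = (patterns * obj).sum(axis=(1, 2))  # one number per pattern

    # Correlate bucket fluctuations with the patterns: the "ghost image."
    ghost = ((buckets - buckets.mean())[:, None, None] * patterns).mean(axis=0)

    # A coarse thresholded view; the bright region matches the object.
    print((ghost > ghost.max() / 2).astype(int)[::4, ::4])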

EDITED TO ADD (12/12): A better explanation of the effect, and a detailed paper.

Posted on November 18, 2009 at 6:22 AM

A Useful Side-Effect of Misplaced Fear

A study in the British Journal of Criminology makes the point that drink-spiking date-rape is basically an urban legend:

Abstract. There is a stark contrast between heightened perceptions of risk associated with drug-facilitated sexual assault (DFSA) and a lack of evidence that this is a widespread threat. Through surveys and interviews with university students in the United Kingdom and United States, we explore knowledge and beliefs about drink-spiking and the linked threat of sexual assault. University students in both locations are not only widely sensitized to the issue, but substantial segments claim first- or second-hand experience of particular incidents. We explore students’ understanding of the DFSA threat in relationship to their attitudes concerning alcohol, binge-drinking, and responsibility for personal safety. We suggest that the drink-spiking narrative has a functional appeal in relation to the contemporary experience of young women’s public drinking.

In an article on the study in The Telegraph, the authors said:

Among young people, drink spiking stories have attractive features that could “help explain” their disproportionate loss of control after drinking alcohol, the study found.

Dr Burgess said: “Our findings suggest guarding against drink spiking has also become a way for women to negotiate how to watch out for each other in an environment where they might well lose control from alcohol consumption.”

[…]

“As Dr Burgess observes, it is not scientific evidence which keeps the drug rape myth alive but the fact that it serves so many useful functions.”

Basically, the hypothesis is that perpetuating the fear of drug-rape allows parents and friends to warn young women off excessive drinking without criticizing their personal choices. The fake bogeyman lets people avoid talking about the real issues.

Posted on November 17, 2009 at 5:58 AM

Public Reactions to Terrorist Threats

Interesting research:

For the last five years we have researched the connection between times of terrorist threats and public opinion. In a series of tightly designed experiments, we expose subsets of research participants to a news story not unlike the type that aired last week. We argue that attitudes, evaluations, and behaviors change in at least three politically-relevant ways when terror threat is more prominent in the news. Some of these transformations are in accord with conventional wisdom concerning how we might expect the public to react. Others are more surprising, and more disconcerting in their implications for the quality of democracy.

One way that public opinion shifts is toward increased expressions of distrust. In some ways this strategy has been actively promoted by our political leaders. The Bush administration repeatedly reminded the public to keep eyes and ears open to help identify dangerous persons. A strategy of vigilance has also been endorsed by the new secretary of Homeland Security, Janet Napolitano.

Nonetheless, the breadth of increased distrust that the public puts into practice is striking. Individuals threatened by terrorism become less trusting of others, even their own neighbors. Other studies have shown that they become less supportive of the rights of Arab and Muslim Americans. In addition, we found that such effects extend to immigrants and, as well, to a group entirely remote from the subject of terrorism: gay Americans. The specter of terrorist threat creates ruptures in our social fabric, some of which may be justified as necessary tactics in the fight against terrorism and others that simply cannot.

Another way public opinion shifts under a terrorist threat is toward inflated evaluations of certain leaders. To look for strong leadership makes sense: crises should impel us toward leadership bold enough to confront the threat and strong enough to protect us from it. But the public does more than call for heroes in times of crisis. It projects leadership qualities onto political figures, with serious political consequences.

In studies conducted in 2004, we found that individuals threatened by terrorism perceived George W. Bush as more charismatic and stronger than did non-threatened individuals. This projection of leadership had important consequences for voting decisions. Individuals threatened by terrorism were more likely to base voting decisions on leadership qualities rather than on their own issue positions or partisanship. You did read that correctly. Threatened individuals responded with elevated evaluations of Bush’s capacity for leadership and then used those inflated evaluations as the primary determinant in their voting decision.

These findings did not just occur among Republicans, but also among Independents and Democrats. All partisan groups who perceived Bush as more charismatic were also less willing to blame him for policy failures such as faulty intelligence that led to the war in Iraq.

[…]

A third way public opinion shifts in response to terrorism is toward greater preferences for policies that protect the homeland, even at the expense of civil liberties, and active engagement against terrorists abroad. Such a strategy was advocated and implemented by the Bush administration. Again, however, we found that preferences shifted toward these objectives regardless of one’s partisan stripes and, as well, outside the U.S.

Nothing surprising here. Fear makes people deferential, docile, and distrustful, and both politicians and marketers have learned to take advantage of this.

Jennifer Merolla and Elizabeth Zechmeister have written a book, Democracy at Risk: How Terrorist Threats Affect the Public. I haven’t read it yet.

Posted on November 16, 2009 at 6:39 AM

Bruce Schneier Action Figure

A month ago, ThatsMyFace.com approached me about making a Bruce Schneier action figure. It’s $100. I’d like to be able to say something like “half the proceeds are going to EPIC and EFF,” but they’re not. That’s the price for custom orders. I don’t even get a royalty. The company is working on lowering the price, and they’ve said that they’ll put a photograph of an actual example on the webpage. I’ve told them that at $100 no one will buy it, but at $40 it’s a funny gift for your corporate IT person. So e-mail the company if you’re interested, and if they get enough interest they’ll do a bulk order.

Posted on November 15, 2009 at 10:22 AM

Blowfish in Fiction

The algorithm is mentioned in Von Neumann’s War, by John Ringo and Travis Taylor.

P. 495:

The guy was using a fairly simple buffer overflow attack but with a very nice little fillip of an encryption packet designed to overcome Blowfish. The point seemed to be to create a zero day exploit, which he didn’t have a chance of managing. So far, nobody had cracked Blowfish.

P. 504:

As far as he could tell, at first, it was a simple Denial of Service attack. A DoS occurred when… But this one was different. Every single packet contained some sort of cracking program … Most had dumped to the honey trap, but they were running rampant through there, while others had managed to hammer past two firewalls and were getting to his final line of defense. Somebody had managed a zero day exploit on Blowfish. And more were coming in!

Posted on November 13, 2009 at 2:43 PM

Beyond Security Theater

[I was asked to write this essay for the New Internationalist (n. 427, November 2009, pp. 10–13). It’s nothing I haven’t said before, but I’m pleased with how this essay came together.]

Terrorism is rare, far rarer than many people think. It’s rare because very few people want to commit acts of terrorism, and executing a terrorist plot is much harder than television makes it appear. The best defenses against terrorism are largely invisible: investigation, intelligence, and emergency response. But even these are less effective at keeping us safe than our social and political policies, both at home and abroad. However, our elected leaders don’t think this way: they are far more likely to implement security theater against movie-plot threats.

A movie-plot threat is an overly specific attack scenario. Whether it’s terrorists with crop dusters, terrorists contaminating the milk supply, or terrorists attacking the Olympics, specific stories affect our emotions more intensely than mere data does. Stories are what we fear. It’s not just hypothetical stories: terrorists flying planes into buildings, terrorists with bombs in their shoes or in their water bottles, and terrorists with guns and bombs waging a co-ordinated attack against a city are even scarier movie-plot threats because they actually happened.

Security theater refers to security measures that make people feel more secure without doing anything to actually improve their security. An example: the photo ID checks that have sprung up in office buildings. No-one has ever explained why verifying that someone has a photo ID provides any actual security, but it looks like security to have a uniformed guard-for-hire looking at ID cards. Airport-security examples include the National Guard troops stationed at US airports in the months after 9/11—their guns had no bullets. The US colour-coded system of threat levels, the pervasive harassment of photographers, and the metal detectors that are increasingly common in hotels and office buildings since the Mumbai terrorist attacks, are additional examples.

To be sure, reasonable arguments can be made that some terrorist targets are more attractive than others: aeroplanes because a small bomb can result in the death of everyone aboard, monuments because of their national significance, national events because of television coverage, and transportation because of the numbers of people who commute daily. But there are literally millions of potential targets in any large country (there are five million commercial buildings alone in the US), and hundreds of potential terrorist tactics; it’s impossible to defend every place against everything, and it’s impossible to predict which tactic and target terrorists will try next.

Feeling and Reality

Security is both a feeling and a reality. The propensity for security theater comes from the interplay between the public and its leaders. When people are scared, they need something done that will make them feel safe, even if it doesn’t truly make them safer. Politicians naturally want to do something in response to crisis, even if that something doesn’t make any sense.

Often, this “something” is directly related to the details of a recent event: we confiscate liquids, screen shoes, and ban box cutters on airplanes. But it’s not the target and tactics of the last attack that are important, but the next attack. These measures are only effective if we happen to guess what the next terrorists are planning. If we spend billions defending our rail systems, and the terrorists bomb a shopping mall instead, we’ve wasted our money. If we concentrate airport security on screening shoes and confiscating liquids, and the terrorists hide explosives in their brassieres and use solids, we’ve wasted our money. Terrorists don’t care what they blow up and it shouldn’t be our goal merely to force the terrorists to make a minor change in their tactics or targets.

Our penchant for movie plots blinds us to the broader threats. And security theater consumes resources that could better be spent elsewhere.

Any terrorist attack is a series of events: something like planning, recruiting, funding, practising, executing, aftermath. Our most effective defenses are at the beginning and end of that process—intelligence, investigation, and emergency response—and least effective when they require us to guess the plot correctly. By intelligence and investigation, I don’t mean the broad data-mining or eavesdropping systems that have been proposed and in some cases implemented—those are also movie-plot stories without much basis in actual effectiveness—but instead the traditional “follow the evidence” type of investigation that has worked for decades.

Unfortunately for politicians, the security measures that work are largely invisible. Such measures include enhancing the intelligence-gathering abilities of the secret services, hiring cultural experts and Arabic translators, building bridges with Islamic communities both nationally and internationally, funding police capabilities—both investigative arms to prevent terrorist attacks, and emergency communications systems for after attacks occur—and arresting terrorist plotters without media fanfare. They do not include expansive new police or spying laws. Our police don’t need any new laws to deal with terrorism; rather, they need apolitical funding. These security measures don’t make good television, and they don’t help, come re-election time. But they work, addressing the reality of security instead of the feeling.

The arrest of the “liquid bombers” in London is an example: they were caught through old-fashioned intelligence and police work. Their choice of target (airplanes) and tactic (liquid explosives) didn’t matter; they would have been arrested regardless.

But even as we do all of this we cannot neglect the feeling of security, because it’s how we collectively overcome the psychological damage that terrorism causes. It’s not security theater we need, it’s direct appeals to our feelings. The best way to help people feel secure is by acting secure around them. Instead of reacting to terrorism with fear, we—and our leaders—need to react with indomitability.

Refuse to Be Terrorized

By not overreacting, by not responding to movie-plot threats, and by not becoming defensive, we demonstrate the resilience of our society, in our laws, our culture, our freedoms. There is a difference between indomitability and arrogant “bring ’em on” rhetoric. There’s a difference between accepting the inherent risk that comes with a free and open society, and hyping the threats.

We should treat terrorists like common criminals and give them all the benefits of true and open justice—not merely because it demonstrates our indomitability, but because it makes us all safer. Once a society starts circumventing its own laws, the risks to its future stability are much greater than terrorism.

Supporting real security even though it’s invisible, and demonstrating indomitability even though fear is more politically expedient, requires real courage. Demagoguery is easy. What we need is leaders willing both to do what’s right and to speak the truth.

Despite fearful rhetoric to the contrary, terrorism is not a transcendent threat. A terrorist attack cannot possibly destroy a country’s way of life; it’s only our reaction to that attack that can do that kind of damage. The more we undermine our own laws, the more we convert our buildings into fortresses, the more we reduce the freedoms and liberties at the foundation of our societies, the more we’re doing the terrorists’ job for them.

We saw some of this in the Londoners’ reaction to the 2005 transport bombings. Among the political and media hype and fearmongering, there was a thread of firm resolve. People didn’t fall victim to fear. They rode the trains and buses the next day and continued their lives. Terrorism’s goal isn’t murder; terrorism attacks the mind, using victims as a prop. By refusing to be terrorized, we deny the terrorists their primary weapon: our own fear.

Today, we can project indomitability by rolling back all the fear-based post-9/11 security measures. Our leaders have lost credibility; getting it back requires a decrease in hyperbole. Ditch the invasive mass surveillance systems and new police state-like powers. Return airport security to pre-9/11 levels. Remove swagger from our foreign policies. Show the world that our legal system is up to the challenge of terrorism. Stop telling people to report all suspicious activity; it does little but make us suspicious of each other, increasing both fear and helplessness.

Terrorism has always been rare, and for all we’ve heard about 9/11 changing the world, it’s still rare. Even 9/11 failed to kill as many people as automobiles do in the US every single month. But there’s a pervasive myth that terrorism is easy. It’s easy to imagine terrorist plots, both large-scale “poison the food supply” and small-scale “10 guys with guns and cars.” Movies and television bolster this myth, so many people are surprised that there have been so few attacks in Western cities since 9/11. Certainly intelligence and investigation successes have made it harder, but mostly it’s because terrorist attacks are actually hard. It’s hard to find willing recruits, to co-ordinate plans, and to execute those plans—and it’s easy to make mistakes.

Counterterrorism is also hard, especially when we’re psychologically prone to muck it up. Since 9/11, we’ve embarked on strategies of defending specific targets against specific tactics, overreacting to every terrorist video, stoking fear, demonizing ethnic groups, and treating the terrorists as if they were legitimate military opponents who could actually destroy a country or a way of life—all of this plays into the hands of terrorists. We’d do much better by leveraging the inherent strengths of our modern democracies and the natural advantages we have over the terrorists: our adaptability and survivability, our international network of laws and law enforcement, and the freedoms and liberties that make our society so enviable. The way we live is open enough to make terrorists rare; we are observant enough to prevent most of the terrorist plots that exist, and indomitable enough to survive the even fewer terrorist plots that actually succeed. We don’t need to pretend otherwise.

EDITED TO ADD (11/14): Commentary from Kevin Drum, James Fallows, and The Economist.

Posted on November 13, 2009 at 6:52 AM

FBI/CIA/NSA Information Sharing Before 9/11

It’s conventional wisdom that the legal “wall” between intelligence and law enforcement was one of the reasons we failed to prevent 9/11. The 9/11 Commission evaluated that claim, and published a classified report in 2004. The report was released, with a few redactions, over the summer: “Legal Barriers to Information Sharing: The Erection of a Wall Between Intelligence and Law Enforcement Investigations,” 9/11 Commission Staff Monograph by Barbara A. Grewe, Senior Counsel for Special Projects, August 20, 2004.

The report concludes otherwise:

“The information sharing failures in the summer of 2001 were not the result of legal barriers but of the failure of individuals to understand that the barriers did not apply to the facts at hand,” the 35-page monograph concludes. “Simply put, there was no legal reason why the information could not have been shared.”

The prevailing confusion was exacerbated by numerous complicating circumstances, the monograph explains. The Foreign Intelligence Surveillance Court was growing impatient with the FBI because of repeated errors in applications for surveillance. Justice Department officials were uncomfortable requesting intelligence surveillance of persons and facilities related to Osama bin Laden since there was already a criminal investigation against bin Laden underway, which normally would have preempted FISA surveillance. Officials were reluctant to turn to the FISA Court of Review for clarification of their concerns since one of the judges on the court had expressed doubts about the constitutionality of FISA in the first place. And so on. Although not mentioned in the monograph, it probably didn’t help that public interest critics in the 1990s (myself included) were accusing the FISA Court of serving as a “rubber stamp” and indiscriminately approving requests for intelligence surveillance.

In the end, the monograph implicitly suggests that if the law was not the problem, then changing the law may not be the solution.

James Bamford comes to much the same conclusion in his book, The Shadow Factory: The NSA from 9/11 to the Eavesdropping on America: there was no legal wall that prevented intelligence and law enforcement from sharing the information necessary to prevent 9/11; it was inter-agency rivalries and turf battles.

Posted on November 12, 2009 at 2:26 PM

Security in a Reputation Economy

In the past, our relationship with our computers was technical. We cared what CPU they had and what software they ran. We understood our networks and how they worked. We were experts, or we depended on someone else for expertise. And security was part of that expertise.

This is changing. We access our email via the web, from any computer or from our phones. We use Facebook, Google Docs, even our corporate networks, regardless of hardware or network. We, especially the younger of us, no longer care about the technical details. Computing is infrastructure; it’s a commodity. It’s less about products and more about services; we simply expect it to work, like telephone service or electricity or a transportation network.

Infrastructures can be spread on a broad continuum, ranging from generic to highly specialized. Power and water are generic; who supplies them doesn’t really matter. Mobile phone services, credit cards, ISPs, and airlines are mostly generic. More specialized infrastructure services are restaurant meals, haircuts, and social networking sites. Highly specialized services include tax preparation for complex businesses, management consulting, legal services, and medical services.

Sales for these services are driven by two things: price and trust. The more generic the service is, the more price dominates. The more specialized it is, the more trust dominates. IT is something of a special case because so much of it is free. So, for both specialized IT services where price is less important and for generic IT services—think Facebook—where there is no price, trust will grow in importance. IT is becoming a reputation-based economy, and this has interesting ramifications for security.

Some years ago, the major credit card companies became concerned about the plethora of credit-card-number thefts from sellers’ databases. They worried that these might undermine the public’s trust in credit cards as a secure payment system for the internet. They knew the sellers would only protect these databases up to the level of the threat to the seller, and not to the greater level of threat to the industry as a whole. So they banded together and produced a security standard called PCI. It’s wholly industry-enforced—by an industry that realized its reputation was more valuable than the sellers’ databases.

A reputation-based economy means that infrastructure providers care more about security than their customers do. I realized this 10 years ago with my own company. We provided network-monitoring services to large corporations, and our internal network security was much more extensive than our customers’. Our customers secured their networks—that’s why they hired us, after all—but only up to the value of their networks. If we mishandled any of our customers’ data, we would have lost the trust of all of our customers.

I heard the same story at an ENISA conference in London last June, when an IT consultant explained that he had begun encrypting his laptop years before his customers did. While his customers might decide that the risk of losing their data wasn’t worth the hassle of dealing with encryption, he knew that if he lost data from one customer, he risked losing all of his customers.

As IT becomes more like infrastructure, more like a commodity, expect service providers to improve security to levels greater than their customers would have done themselves.

In IT, customers learn about company reputation from many sources: magazine articles, analyst reviews, recommendations from colleagues, awards, certifications, and so on. Of course, this only works if customers have accurate information. In a reputation economy, companies have a motivation to hide their security problems.

You’ve all experienced a reputation economy: restaurants. Some restaurants have a good reputation, and are filled with regulars. When restaurants get a bad reputation, people stop coming and they close. Tourist restaurants—whose main attraction is their location, and whose customers frequently don’t know anything about their reputation—can thrive even if they aren’t any good. And sometimes a restaurant can keep its reputation—an award in a magazine, a special occasion restaurant that “everyone knows” is the place to go—long after its food and service have declined.

The reputation economy is far from perfect.

This essay originally appeared in The Guardian.

Posted on November 12, 2009 at 6:30 AM

Hacking the Brazil Power Grid

We’ve seen lots of rumors, both in the U.S. and elsewhere, about people hacking the power grid. President Obama mentioned it in his May cybersecurity speech: “In other countries cyberattacks have plunged entire cities into darkness.” Seems like the source of these rumors has been Brazil:

Several prominent intelligence sources confirmed that there were a series of cyber attacks in Brazil: one north of Rio de Janeiro in January 2005 that affected three cities and tens of thousands of people, and another, much larger event beginning on Sept. 26, 2007.

That one in the state of Espirito Santo affected more than three million people in dozens of cities over a two-day period, causing major disruptions. In Vitoria, the world’s largest iron ore producer had seven plants knocked offline, costing the company $7 million. It is not clear who did it or what the motive was.

60 Minutes called me while they were researching this story. They had a lot more unsubstantiated information than they presented here: names of groups that were involved, allegations of extortion, government coverups, and so on. It would be nice to know what really happened.

EDITED TO ADD (11/11): Wired says that the attacks were caused by sooty insulators. The counterargument, of course, is that sooty insulators are just the cover story because the whole hacker thing is secret.

Wired also mentions that, in an interview last month, Richard Clarke named Brazil as a victim of these attacks.

Posted on November 11, 2009 at 12:19 PM

Protecting OSs from Rootkits

Interesting research: “Countering Kernel Rootkits with Lightweight Hook Protection,” by Zhi Wang, Xuxian Jiang, Weidong Cui, and Peng Ning.

Abstract: Kernel rootkits have posed serious security threats due to their stealthy manner. To hide their presence and activities, many rootkits hijack control flows by modifying control data or hooks in the kernel space. A critical step towards eliminating rootkits is to protect such hooks from being hijacked. However, it remains a challenge because there exist a large number of widely-scattered kernel hooks and many of them could be dynamically allocated from kernel heap and co-located together with other kernel data. In addition, there is a lack of flexible commodity hardware support, leading to the so-called protection granularity gap—kernel hook protection requires byte-level granularity but commodity hardware only provides page-level protection.

To address the above challenges, in this paper, we present HookSafe, a hypervisor-based lightweight system that can protect thousands of kernel hooks in a guest OS from being hijacked. One key observation behind our approach is that a kernel hook, once initialized, may be frequently “read”-accessed, but rarely “write”-accessed. As such, we can relocate those kernel hooks to a dedicated page-aligned memory space and then regulate accesses to them with hardware-based page-level protection. We have developed a prototype of HookSafe and used it to protect more than 5,900 kernel hooks in a Linux guest. Our experiments with nine real-world rootkits show that HookSafe can effectively defeat their attempts to hijack kernel hooks. We also show that HookSafe achieves such a large-scale protection with a small overhead (e.g., around 6% slowdown in performance benchmarks).
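
The core trick, indirection plus write mediation, can be modeled in a few lines. The sketch below is only a conceptual analogy in Python for what HookSafe does with page-level hardware protection under a hypervisor; the names and policy are invented:

    # Conceptual model of HookSafe-style hook indirection: hooks live in
    # one dedicated table, reads are unmediated (the common, fast case),
    # and every write is trapped and checked (the rare case).
    class ProtectedHookTable:
        def __init__(self, hooks, trusted_writers):
            self._hooks = dict(hooks)        # relocated "kernel hooks"
            self._trusted = set(trusted_writers)

        def read(self, name):
            return self._hooks[name]         # fast path: no mediation

        def write(self, name, value, writer):
            if writer not in self._trusted:  # trap: enforce write policy
                raise PermissionError(f"{writer} may not modify {name!r}")
            self._hooks[name] = value

    table = ProtectedHookTable(hooks={"sys_open": "kernel_sys_open"},
                               trusted_writers={"module_loader"})

    print(table.read("sys_open"))                              # allowed
    table.write("sys_open", "driver_open", "module_loader")    # allowed
    try:
        table.write("sys_open", "evil_hook", "rootkit")        # blocked
    except PermissionError as e:
        print("blocked:", e)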

The research will be presented at the 16th ACM Conference on Computer and Communications Security this week. Here’s an article on the research.

Posted on November 10, 2009 at 1:26 PM

Is Antivirus Dead?

This essay previously appeared in Information Security Magazine, as the second half of a point-counterpoint with Marcus Ranum. You can read his half here as well.

Security is never black and white. If someone asks, “for best security, should I do A or B?” the answer almost invariably is both. But security is always a trade-off. Often it’s impossible to do both A and B—there’s no time to do both, it’s too expensive to do both, or whatever—and you have to choose. In that case, you look at A and B and you make your best choice. But it’s almost always more secure to do both.

Yes, antivirus programs have been getting less effective as new viruses are more frequent and existing viruses mutate faster. Yes, antivirus companies are forever playing catch-up, trying to create signatures for new viruses. Yes, signature-based antivirus software won’t protect you when a virus is new, before the signature is added to the detection program. Antivirus is by no means a panacea.

On the other hand, an antivirus program with up-to-date signatures will protect you from a lot of threats. It’ll protect you against viruses, against spyware, against Trojans—against all sorts of malware. It’ll run in the background, automatically, and you won’t notice any performance degradation at all. And—here’s the best part—it can be free. AVG won’t cost you a penny. To me, this is an easy trade-off, certainly for the average computer user who clicks on attachments he probably shouldn’t click on, downloads things he probably shouldn’t download, and doesn’t understand the finer workings of Windows Personal Firewall.

Certainly security would be improved if people used whitelisting programs such as Bit9 Parity and Savant Protection—and I personally recommend Malwarebytes’ Anti-Malware—but a lot of users are going to have trouble with this. The average user will probably just swat away the “you’re trying to run a program not on your whitelist” warning message or—even worse—wonder why his computer is broken when he tries to run a new piece of software. The average corporate IT department doesn’t have a good idea of what software is running on all the computers within the corporation, and doesn’t want the administrative overhead of managing all the change requests. And whitelists aren’t a panacea, either: they don’t defend against malware that attaches itself to data files (think Word macro viruses), for example.
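
The structural difference between blacklisting and whitelisting fits in a few lines. A sketch that reduces “signatures” to whole-file hashes for brevity (real antivirus engines match byte patterns and heuristics, and real whitelisting products track publishers and versions):

    import hashlib
    from pathlib import Path

    def file_hash(path):
        """Identify a file by its SHA-256 digest."""
        return hashlib.sha256(Path(path).read_bytes()).hexdigest()

    # Blacklisting (antivirus): allow by default, block what's known bad.
    def av_allows(path, known_bad):
        return file_hash(path) not in known_bad

    # Whitelisting (Bit9-style): deny by default, run only what's approved.
    def whitelist_allows(path, known_good):
        return file_hash(path) in known_good

    # The failure modes are mirror images: a brand-new virus sails past
    # the blacklist (no signature yet), while a brand-new legitimate
    # program is blocked by the whitelist until someone approves it.
    sample = Path("sample.bin")
    sample.write_bytes(b"some new program")
    print(av_allows(sample, known_bad=set()))           # True: it runs
    print(whitelist_allows(sample, known_good=set()))   # False: blocked
    sample.unlink()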

One of the newest trends in IT is consumerization, and if you don’t already know about it, you soon will. It’s the idea that new technologies, the cool stuff people want, will become available for the consumer market before they become available for the business market. What it means to business is that people—employees, customers, partners—will access business networks from wherever they happen to be, with whatever hardware and software they have. Maybe it’ll be the computer you gave them when you hired them. Maybe it’ll be their home computer, the one their kids use. Maybe it’ll be their cell phone or PDA, or a computer in a hotel’s business center. Your business will have no way to know what they’re using, and—more importantly—you’ll have no control.

In this kind of environment, computers are going to connect to each other without a whole lot of trust between them. Untrusted computers are going to connect to untrusted networks. Trusted computers are going to connect to untrusted networks. The whole idea of “safe computing” is going to take on a whole new meaning—every man for himself. A corporate network is going to need a simple, dumb, signature-based antivirus product at its gateway. And a user is going to need a similar program to protect his computer.

Bottom line: antivirus software is neither necessary nor sufficient for security, but it’s still a good idea. It’s not a panacea that magically makes you safe, nor is it obsolete in the face of current threats. As countermeasures go, it’s cheap, it’s easy, and it’s effective. I haven’t dumped my antivirus program, and I have no intention of doing so anytime soon.

Posted on November 10, 2009 at 6:31 AM98 Comments

John Mueller on Zazi

I have refrained from commenting on the case against Najibullah Zazi, simply because it’s so often the case that the details reported in the press have very little to do with reality. My suspicion was that, as in so many other cases, he was an idiot who couldn’t do any real harm and was turned into a bogeyman for political purposes.

However, John Mueller—who I’ve written about before—has done the research:

Recalls his step-uncle affectionately, Zazi is “a dumb kid, believe me.” A high school dropout, Zazi mostly worked as a doughnut peddler in Lower Manhattan, barely making a living. Somewhere along the line, it is alleged, he took it into his head to set off a bomb and traveled to Pakistan where he received explosives training from al-Qaeda and copied nine pages of chemical bombmaking instructions onto his laptop. FBI Director Robert Mueller asserted in testimony on September 30 that this training gave Zazi the “capability” to set off a bomb.

That, however, seems to be a substantial overstatement—not unlike the Director’s 2003 testimony assuring us that, although his agency had yet to identify an al-Qaeda cell in the U.S., such unidentified entities nonetheless presented “the greatest threat,” had “developed a support infrastructure” in the country, and were able and intended to inflict “significant casualties in the US with little warning.”

An overstatement because, upon returning to the United States, Zazi allegedly spent the better part of a year trying to concoct the bomb he had supposedly learned how to make. In the process, he, or some confederates, purchased bomb materials using stolen credit cards, a bone-headed maneuver guaranteeing that red flags would go up about the sale and that surveillance videos in the stores would be maintained rather than routinely erased.

However, even with the material at hand, Zazi still apparently couldn’t figure it out, and he frantically contacted an unidentified person for help several times. Each of these communications was “more urgent in tone than the last,” according to court documents.

Clearly, if Zazi was able eventually to bring his alleged aspirations to fruition, he could have done some damage, though, given his capacities, the person most in existential danger was surely the lapsed doughnut peddler himself.

As I said in 2007:

Terrorism is a real threat, and one that needs to be addressed by appropriate means. But allowing ourselves to be terrorized by wannabe terrorists and unrealistic plots—and worse, allowing our essential freedoms to be lost by using them as an excuse—is wrong.

[…]

I’ll be the first to admit that I don’t have all the facts in any of these cases. None of us do. So let’s have some healthy skepticism. Skepticism when we read about these terrorist masterminds who were poised to kill thousands of people and do incalculable damage. Skepticism when we’re told that their arrest proves that we need to give away our own freedoms and liberties. And skepticism that those arrested are even guilty in the first place.

The problem with these arrests is that the crimes have not happened yet. So these cases involve trying to divine what people will do in the future. They involve trying to guess as to people’s motives and abilities. They often involve informants with questionable integrity, and my worry is that in our zeal to prevent terrorism, we create terrorists where there weren’t any to begin with.

Mueller writes:

It follows that any terrorism problem within the United States principally derives from homegrown people like Zazi, often isolated from each other, who fantasize about performing dire deeds. Penn State’s Michael Kenney has interviewed dozens of officials and intelligence agents and analyzed court documents, and finds homegrown Islamic militants to be operationally unsophisticated, short on know-how, prone to make mistakes, poor at planning, and severely hampered by a limited capacity to learn. Another study documents the difficulties of network coordination that continually threaten operational unity, trust, cohesion, and the ability to act collectively. And the popular notion these characters have the capacity to steal or put together an atomic bomb seems, to put it mildly, as fanciful as some of the terrorists’ schemes.

By contrast, the image projected by the Department of Homeland Security continues to be of an enemy that is “relentless, patient, opportunistic, and flexible,” shows “an understanding of the potential consequence of carefully planned attacks on economic, transportation, and symbolic targets,” seriously threatens “national security,” and could inflict “mass casualties, weaken the economy, and damage public morale and confidence.” That description may fit some terrorists—the 9/11 hijackers among them. But not the vast majority, including the hapless Zazi.

EDITED TO ADD (11/9): This is the Michael Kenney paper that Mueller cites.

Posted on November 9, 2009 at 12:15 PM36 Comments

Laissez-Faire Access Control

Recently I wrote about the difficulty of making role-based access control work, and how research at Dartmouth showed that it was better to let people take the access control they need to do their jobs, and audit the results. This interesting paper, “Laissez-Faire File Sharing,” tries to formalize that sort of access control.

Abstract: When organizations deploy file systems with access control mechanisms that prevent users from reliably sharing files with others, these users will inevitably find alternative means to share. Alas, these alternatives rarely provide the same level of confidentiality, integrity, or auditability provided by the prescribed file systems. Thus, the imposition of restrictive mechanisms and policies by system designers and administrators may actually reduce the system’s security.

We observe that the failure modes of file systems that enforce centrally-imposed access control policies are similar to the failure modes of centrally-planned economies: individuals either learn to circumvent these restrictions as matters of necessity or desert the system entirely, subverting the goals behind the central policy.

We formalize requirements for laissez-faire sharing, which parallel the requirements of free market economies, to better address the file sharing needs of information workers. Because individuals are less likely to feel compelled to circumvent systems that meet these laissez-faire requirements, such systems have the potential to increase both productivity and security.

Think of Wikipedia as the ultimate example of this. Everybody has access to everything, but there are audit mechanisms in place to prevent abuse.
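In code, this model is almost embarrassingly simple: grant by default, log everything, audit after the fact. A sketch under those assumptions (the names are mine, not the paper’s):

```python
import time

AUDIT_LOG = []  # in a real system this would be tamper-evident storage

def access(user: str, resource: str, action: str) -> bool:
    # Laissez-faire: every request succeeds, but leaves a durable record.
    AUDIT_LOG.append((time.time(), user, resource, action))
    return True

def audit(user: str):
    # Abuse is handled after the fact, by reviewing what a user touched.
    return [entry for entry in AUDIT_LOG if entry[1] == user]

access("alice", "payroll.xls", "read")
access("mallory", "payroll.xls", "copy")
print(audit("mallory"))
```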

Posted on November 9, 2009 at 6:59 AM39 Comments

The Doghouse: ADE 651

A divining rod to find explosives in Iraq:

ATSC’s promotional material claims that its device can find guns, ammunition, drugs, truffles, human bodies and even contraband ivory at distances up to a kilometer, underground, through walls, underwater or even from airplanes three miles high. The device works on “electrostatic magnetic ion attraction,” ATSC says.

To detect materials, the operator puts an array of plastic-coated cardboard cards with bar codes into a holder connected to the wand by a cable. “It would be laughable,” Colonel Bidlack said, “except someone down the street from you is counting on this to keep bombs off the streets.”

Proponents of the wand often argue that errors stem from the human operator, who they say must be rested, with a steady pulse and body temperature, before using the device.

Then the operator must walk in place a few moments to “charge” the device, since it has no battery or other power source, and walk with the wand at right angles to the body. If there are explosives or drugs to the operator’s left, the wand is supposed to swivel to the operator’s left and point at them.

If, as often happens, no explosives or weapons are found, the police may blame a false positive on other things found in the car, like perfume, air fresheners or gold fillings in the driver’s teeth.

Complete quackery, sold by Cumberland Industries:

Still, the Iraqi government has purchased more than 1,500 of the devices, known as the ADE 651, at costs from $16,500 to $60,000 each. Nearly every police checkpoint, and many Iraqi military checkpoints, have one of the devices, which are now normally used in place of physical inspections of vehicles.

James Randi says:

This Foundation will give you our million-dollar prize upon the successful testing of the ADE651® device. Such test can be performed by anyone, anywhere, under your conditions, by you or by any appointed person or persons, in direct satisfaction of any or all of the provisions laid out above by you.

No one will respond to this, because the ADE651® is a useless, quack, device which cannot perform any other function than separating naïve persons from their money. It’s a fake, a scam, a swindle, and a blatant fraud. The manufacturers, distributors, vendors, advertisers, and retailers of the ADE651® device are criminals, liars, and thieves who will ignore this challenge because they know the device, the theory, the described principles of operation, and the technical descriptions given, are nonsense, lies, and fraudulent.

And he quotes from the Cumberland Industries literature (not online, unfortunately):

Ignores All Known Concealment Methods. By programming the detection cards to specifically target a particular substance, (through the proprietary process of electro-static matching of the ionic charge and structure of the substance), the ADE651® will “by-pass” all known attempts to conceal the target substance. It has been shown to penetrate Lead, other metals, concrete, and other matter (including hiding in the body) used in attempts to block the attraction.

No Consumables nor Maintenance Contracts Required. Unlike Trace Detectors that require the supply of sample traps, the ADE651® does not utilize any consumables (exceptions include: cotton-gloves and cleanser) thereby reducing the operational costs of the equipment. The equipment is Operator maintained and requires no ongoing maintenance service contracts. It comes with a hardware three year warranty. Since the equipment is powered electro statically, there are no batteries or conventional power supplies to change or maintain.

One interesting point is that the effectiveness of this device depends strongly on what the bad guys think about its effectiveness. If the bad guys think it works, they have to find someone who is 1) willing to kill himself, and 2) rational enough to keep his cool while being tested by one of these things. I’ll bet that the ADE 651 makes it harder to recruit suicide bombers.

But what happened to the days when you could buy a divining rod for $100?

EDITED TO ADD (11/11): In case the company pulls the spec sheet, it’s archived here.

Posted on November 6, 2009 at 6:55 AM

Mossad Hacked Syrian Official’s Computer

It was unattended in a hotel room at the time:

Israel’s Mossad espionage agency used Trojan Horse programs to gather intelligence about a nuclear facility in Syria the Israel Defense Forces destroyed in 2007, the German magazine Der Spiegel reported Monday.

According to the magazine, Mossad agents in London planted the malware on the computer of a Syrian official who was staying in the British capital; he was at a hotel in the upscale neighborhood of Kensington at the time.

The program copied the details of Syria’s illicit nuclear program and sent them directly to the Mossad agents’ computers, the report said.

Remember the evil maid attack: if an attacker gets hold of your computer temporarily, he can bypass your encryption software.

Posted on November 5, 2009 at 12:48 PM22 Comments

The Problems with Unscientific Security

From the Open Access Journal of Forensic Psychology, by a whole list of authors: “A Call for Evidence-Based Security Tools”:

Abstract: Since the 2001 attacks on the twin towers, policies on security have changed drastically, bringing about an increased need for tools that allow for the detection of deception. Many of the solutions offered today, however, lack scientific underpinning.

We recommend two important changes to improve the (cost) effectiveness of security policy. To begin with, the emphasis of deception research should shift from technological to behavioural sciences. Secondly, the burden of proof should lie with the manufacturers of the security tools. Governments should not rely on security tools that have not passed scientific scrutiny, and should only employ those methods that have been proven effective. After all, the use of tools that do not work will only get us further from the truth.

One excerpt:

In absence of systematic research, users will base their evaluation on data generated by field use. Because people tend to follow heuristics rather than the rules of probability theory, perceived effectiveness can substantially differ from true effectiveness (Tversky & Kahneman, 1973). For example, one well-known problem associated with field studies is that of selective feedback. Investigative authorities are unlikely to receive feedback from liars who are erroneously considered truthful. They will occasionally receive feedback when correctly detecting deception, for example through confessions (Patrick & Iacono, 1991; Vrij, 2008). The perceived effectiveness that follows from this can be further reinforced through confirmation bias: Evidence confirming one’s preconception is weighted more heavily than evidence contradicting it (Lord, Ross, & Lepper, 1979). As a result, even techniques that perform at chance level may be perceived as highly effective (Iacono, 1991). This unwarranted confidence can have profound effects on citizens’ safety and civil liberty: Criminals may escape detection while innocents may be falsely accused. The Innocence Project (Unvalidated or improper science, no date) demonstrates that unvalidated or improper forensic science can indeed lead to wrongful convictions (see also Saks & Koehler, 2005).
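The selective-feedback effect is easy to demonstrate with a quick simulation. Below, a “detector” that flags subjects purely at chance looks impressive, because misses (liars judged truthful) generate no feedback at all; all the rates are assumptions for illustration:

```python
import random

random.seed(0)
confirmed_hits = visible_errors = 0

for _ in range(100_000):
    liar = random.random() < 0.2        # assume 20% of subjects deceive
    flagged = random.random() < 0.5     # "detector" flags at pure chance
    if flagged and liar:
        if random.random() < 0.5:       # half of true hits end in confession
            confirmed_hits += 1
    elif flagged and not liar:
        if random.random() < 0.05:      # false accusations rarely surface
            visible_errors += 1
    # liars judged truthful are never heard from again: no feedback

perceived = confirmed_hits / (confirmed_hits + visible_errors)
print(f"perceived accuracy: {perceived:.0%}")   # roughly 70%
# True precision is just the 20% base rate: the detector knows nothing.
```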

Article on the paper.

Posted on November 5, 2009 at 6:11 AM33 Comments

Fear and Overreaction

It’s hard work being prey. Watch the birds at a feeder. They’re constantly on alert, and will fly away from food—from easy nutrition—at the slightest movement or sound. Given that I’ve never, ever seen a bird plucked from a feeder by a predator, it seems like a whole lot of wasted effort against not that big a threat.

Assessing and reacting to risk is one of the most important things a living creature has to deal with. The amygdala, an ancient part of the brain that first evolved in primitive fishes, has that job. It’s what’s responsible for the fight-or-flight reflex. Adrenaline in the bloodstream, increased heart rate, increased muscle tension, sweaty palms; that’s the amygdala in action. And it works fast, faster than consciousness: show someone a snake and their amygdala will react before their conscious brain registers that they’re looking at a snake.

Fear motivates all sorts of animal behaviors. Schooling, flocking, and herding are all security measures. Not only is it less likely that any member of the group will be eaten, but each member of the group has to spend less time watching out for predators. Animals as diverse as bumblebees and monkeys avoid food in areas where predators are common. Different prey species have developed various alarm calls, some surprisingly specific. And some prey species have even evolved to react to the alarms given off by other species.

Evolutionary biologist Randolph Nesse has studied animal defenses, particularly those that seem to be overreactions. These defenses are mostly all-or-nothing; a creature can’t do them halfway. Birds flying off, sea cucumbers expelling their stomachs, and vomiting are all examples. Using signal detection theory, Nesse showed that all-or-nothing defenses are expected to have many false alarms. “The smoke detector principle shows that the overresponsiveness of many defenses is an illusion. The defenses appear overresponsive because they are ‘inexpensive’ compared to the harms they protect against and because errors of too little defense are often more costly than errors of too much defense.”

So according to the theory, if flight costs 100 calories, both in flying and lost eating time, and there’s a 1 in 100 chance of being eaten if you don’t fly away, it’s smarter for survival to use up 10,000 calories repeatedly flying at the slightest movement even though there’s a 99 percent false alarm rate. Whatever the numbers happen to be for a particular species, it has evolved to get the trade-off right.
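To make that arithmetic explicit (the calorie cost of being eaten is my stand-in figure; the argument only requires that it dwarf the cost of a single flight):

```python
flight_cost = 100         # calories lost per flight, including missed feeding
p_predator = 0.01         # 1-in-100 chance a given alarm is a real predator
death_cost = 100_000      # assumed calorie-equivalent cost of being eaten

expected_cost_fleeing = flight_cost                 # pay 100 on every alarm
expected_cost_staying = p_predator * death_cost     # expect 1,000 per alarm

# Fleeing every time wins, even with a 99 percent false-alarm rate.
print(expected_cost_fleeing, expected_cost_staying)
```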

This makes sense, until the conditions that the species evolved under change more quickly than evolution can react. Even though there are far fewer predators in the city, birds at my feeder react as if they were in the primal forest. Even birds safe in a zoo’s aviary don’t realize that the situation has changed.

Humans are both no different and very different. We, too, feel fear and react with our amygdala, but we also have a conscious brain that can override those reactions. And we too live in a world very different from the one we evolved in. Our reflexive defenses might be optimized for the risks endemic to living in small family groups in the East African highlands in 100,000 BC, not 2009 New York City. But we can go beyond fear, and actually think sensibly about security.

Far too often, we don’t. We tend to be poor judges of risk. We overreact to rare risks, we ignore long-term risks, we magnify risks that are also morally offensive. We get risks wrong—threats, probabilities, and costs—all the time. When we’re afraid, really afraid, we’ll do almost anything to make that fear go away. Both politicians and marketers have learned to push that fear button to get us to do what they want.

One night last month, I was awakened from my hotel-room sleep by a loud, piercing alarm. There was no way I could ignore it, but I weighed the risks and did what any reasonable person would do under the circumstances: I stayed in bed and waited for the alarm to be turned off. No point getting dressed, walking down ten flights of stairs, and going outside into the cold for what invariably would be a false alarm—serious hotel fires are very rare. Unlike the bird in an aviary, I knew better.

You can disagree with my risk calculus, and I’m sure many hotel guests walked downstairs and outside to the designated assembly point. But it’s important to recognize that the ability to have this sort of discussion is uniquely human. And we need to have the discussion repeatedly, whether the topic is the installation of a home burglar alarm, the latest TSA security measures, or the potential military invasion of another country. These things aren’t part of our evolutionary history; we have no natural sense of how to respond to them. Our fears are often calibrated wrong, and reason is the only way we can override them.

This essay first appeared on DarkReading.com.

Posted on November 4, 2009 at 7:12 AM64 Comments

Zero-Tolerance Policies

Recent stories have documented the ridiculous effects of zero-tolerance weapons policies in a Delaware school district: a first-grader expelled for taking a camping utensil to school, a 13-year-old expelled after another student dropped a pocketknife in his lap, and a seventh-grader expelled for cutting paper with a utility knife for a class project. Where’s the common sense? the editorials cry.

These so-called zero-tolerance policies are actually zero-discretion policies. They’re policies that must be followed, no situational discretion allowed. We encounter them whenever we go through airport security: no liquids, gels or aerosols. Some workplaces have them for sexual harassment incidents; in some sports a banned substance found in a urine sample means suspension, even if it’s for a real medical condition. Judges have zero discretion when faced with mandatory sentencing laws: three strikes for drug offenses and you go to jail, mandatory sentencing for statutory rape (underage sex), etc. A national restaurant chain won’t serve hamburgers rare, even if you offer to sign a waiver. Whenever you hear “that’s the rule, and I can’t do anything about it”—and they’re not lying to get rid of you—you’re butting up against a zero-discretion policy.

These policies enrage us because they are blind to circumstance. Editorial after editorial denounced the suspensions of elementary school children for offenses that anyone with any common sense would agree were accidental and harmless. The Internet is filled with essays demonstrating how the TSA’s rules are nonsensical and sometimes don’t even improve security. I’ve written some of them. What we want is for those involved in the situations to have discretion.

However, problems with discretion were the reason behind these mandatory policies in the first place. Discretion is often applied inconsistently. One school principal might deal with knives in the classroom one way, and another principal another way. Your drug sentence could depend considerably on how sympathetic your judge is, or on whether she’s having a bad day.

Even worse, discretion can lead to discrimination. Schools had weapons bans before zero-tolerance policies, but teachers and administrators enforced the rules disproportionately against African-American students. Criminal sentences varied by race, too. The benefit of zero-discretion rules and laws is that they ensure that everyone is treated equally.

Zero-discretion rules also protect against lawsuits. If the rules are applied consistently, no parent, air traveler or defendant can claim he was unfairly discriminated against.

So that’s the choice. Either we want the rules enforced fairly across the board, which means limiting the discretion of the enforcers at the scene at the time, or we want a more nuanced response to whatever the situation is, which means we give those involved in the situation more discretion.

Of course, there’s more to it than that. The problem with the zero-tolerance weapons rules isn’t that they’re rigid, it’s that they’re poorly written.

What constitutes a weapon? Is it any knife, no matter how small? Should the penalties be the same for a first grader and a high school student? Does intent matter? When an aspirin carried for menstrual cramps becomes “drug possession,” you know there’s a badly written rule in effect.

It’s the same with airport security and criminal sentencing. Broad and simple rules may be simpler to follow—and require less thinking on the part of those enforcing them—but they’re almost always far less nuanced than our complex society requires. Unfortunately, the more complex the rules are, the more they’re open to interpretation and the more discretion the interpreters have.

The solution is to combine the two, rules and discretion, with procedures to make sure they’re not abused. Provide rules, but don’t make them so rigid that there’s no room for interpretation. Give the people in the situation—the teachers, the airport security agents, the policemen, the judges—discretion to apply the rules to the situation. But—and this is the important part—allow people to appeal the results if they feel they were treated unfairly. And regularly audit the results to ensure there is no discrimination or favoritism. It’s the combination of the four that works: rules plus discretion plus appeal plus audit.

All systems need some form of redress, whether it be open and public like a courtroom or closed and secret like the TSA. Giving discretion to those at the scene just makes for a more efficient appeals process, since the first level of appeal can be handled on the spot.

Zachary, the Delaware first grader suspended for bringing a combination fork, spoon and knife camping utensil to eat his lunch with, had his punishment unanimously overturned by the school board. This was the right decision, but what about all the other students whose parents weren’t forceful or media-savvy enough to turn their child’s plight into a national story? Common sense in applying rules is important, but so is equal access to that common sense.

This essay originally appeared on the Minnesota Public Radio website.

EDITED TO ADD (11/11): Another example:

A former soldier who handed a discarded shotgun in to police faces at least five years’ imprisonment for “doing his duty.”

Posted on November 3, 2009 at 11:17 AM49 Comments

Detecting Terrorists by Smelling Fear

Really:

The technology relies on recognising a pheromone – or scent signal – produced in sweat when a person is scared.

Researchers hope the “fear detector” will make it possible to identify individuals at check points who are up to no good.

Terrorists with murder in mind, drug smugglers, or criminals on the run are likely to be very fearful of being discovered.

Seems like yet another technology that will be swamped with false positives.
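A back-of-the-envelope Bayes calculation shows why. Even granting the device generous accuracy (all the numbers below are assumptions), the rarity of terrorists means nearly every alarm is false:

```python
sensitivity = 0.99     # assumed: flags 99% of actual terrorists
false_positive = 0.01  # assumed: wrongly flags 1% of innocent travelers
base_rate = 1e-6       # assumed: one terrorist per million people screened

p_flag = sensitivity * base_rate + false_positive * (1 - base_rate)
p_terrorist_given_flag = sensitivity * base_rate / p_flag

# Roughly 1 in 10,000 alarms points at a real terrorist; the other
# 9,999 are innocent people who merely smelled scared.
print(f"{p_terrorist_given_flag:.2%}")
```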

And is there any justification for the hypothesis that terrorists will be more afraid than anyone else? And do we know why people tend to feel fear? Is it because they’re up to no good, or because of more benign reasons—like they’re scared of something? This link from emotion to intent is very tenuous.

Posted on November 3, 2009 at 6:12 AM75 Comments

The FBI and Wiretaps

To aid its Wall Street investigations, the FBI used DCSNet, its massive surveillance system.

Prosecutors are using the FBI’s massive surveillance system, DCSNet, which stands for Digital Collection System Network. According to Wired magazine, this system connects FBI wiretapping rooms to switches controlled by traditional land-line operators, internet-telephony providers and cellular companies. It can be used to instantly wiretap almost any communications device in the U.S.—wireless or tethered. In other words, you and I have no privacy. The government can listen in on any call made in the continental U.S. (This is all well and good if you trust every government employee. But what if an attorney general running for higher office will do anything to finger a high-profile target? Or what if a prosecutor has a personal grudge he’d like to fulfill? It seems to me it would be easy for this power to fall into the wrong hands.)

Posted on November 2, 2009 at 8:57 AM33 Comments
