Entries Tagged "false positives"


Users Rationally Rejecting Security Advice

This paper, by Cormac Herley at Microsoft Research, sounds like me:

Abstract: It is often suggested that users are hopelessly lazy and
unmotivated on security questions. They choose weak passwords, ignore security warnings, and are oblivious to certificate errors. We argue that users’ rejection of the security advice they receive is entirely rational from an economic perspective. The advice offers to shield them from the direct costs of attacks, but burdens them with far greater indirect costs in the form of effort. Looking at various examples of security advice we find that the advice is complex and growing, but the benefit is largely speculative or moot. For example, much of the advice concerning passwords is outdated and does little to address actual threats, and fully 100% of certificate error warnings appear to be false positives. Further, if users spent even a minute a day reading URLs to avoid phishing, the cost (in terms of user time) would be two orders of magnitude greater than all phishing losses. Thus we find that most security advice simply offers a poor cost-benefit tradeoff to users and is rejected. Security advice is a daily burden, applied to the whole population, while an upper bound on the benefit is the harm suffered by the fraction that become victims annually. When that fraction is small, designing security advice that is beneficial is very hard. For example, it makes little sense to burden all users with a daily task to spare 0.01% of them a modest annual pain.

Sounds like me.
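The arithmetic behind the phishing claim is easy to reproduce. Here is a minimal sketch of it; every number below is an invented placeholder for illustration, not a figure from the paper:

```python
# Back-of-the-envelope version of the phishing cost-benefit claim in the abstract.
# Every number below is a hypothetical placeholder, not a figure from the paper.

users = 180e6                    # assumed number of online users
minutes_per_day = 1.0            # time spent scrutinizing URLs, per user per day
hourly_value_of_time = 7.25      # assumed value of a user's time, in $/hour
annual_phishing_losses = 60e6    # assumed direct phishing losses, in $/year

# The advice is a daily burden on the whole population.
cost_of_advice = users * (minutes_per_day / 60) * hourly_value_of_time * 365

print(f"Cost of the advice:   ${cost_of_advice:,.0f} per year")
print(f"Harm it could avert:  ${annual_phishing_losses:,.0f} per year")
print(f"Ratio: roughly {cost_of_advice / annual_phishing_losses:,.0f}x")
```

With those made-up inputs the ratio comes out at around a hundred to one; the point is the shape of the argument, not the precision of the estimate.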

EDITED TO ADD (12/12): Related article on usable security.

Posted on November 24, 2009 at 12:40 PM

Decertifying "Terrorist" Pilots

This article reads like something written by the company’s PR team.

When it comes to sleuthing these days, knowing your way within a database is as valued a skill as the classic, Sherlock Holmes-styled powers of detection.

Safe Banking Systems Software proved this very point in a demonstration of its algorithm acumen—one that resulted in a disclosure that convicted terrorists actually maintained working licenses with the U.S. Federal Aviation Administration.

The algorithm seems to be little more than matching up names and other basic info:

It used its algorithm-detection software to sift out uncommon names such as Abdelbaset Ali Elmegrahi, aka the Lockerbie bomber. It found that a number of licensed airmen all had the same P.O. box as their listed address—one that happened to be in Tripoli, Libya. These men all had working FAA certificates. And while the FAA database information investigated didn’t contain date-of-birth information, Safe Banking was able to use content on the FAA Website to determine these key details as well, to further gain a positive and clear identification of the men in question.
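As a rough illustration of what that kind of matching involves (a hypothetical sketch, not Safe Banking Systems’ actual software), the core is just normalizing names and comparing records on shared identifiers like an address:

```python
# Hypothetical sketch of watchlist-style matching on names and addresses.
# This illustrates the general idea only, not the vendor's actual algorithm.
from difflib import SequenceMatcher

watchlist = [{"name": "Abdelbaset Ali Elmegrahi", "address": "P.O. Box 123, Tripoli, Libya"}]

airmen = [
    {"name": "Abdelbaset A. Elmegrahi", "address": "PO Box 123, Tripoli, Libya", "cert": "dispatcher"},
    {"name": "Jane Doe", "address": "PO Box 9, Omaha, NE", "cert": "private pilot"},
]

def normalize(text):
    """Lowercase and drop punctuation so trivially different spellings compare equal."""
    return " ".join("".join(c for c in text.lower() if c.isalnum() or c.isspace()).split())

def similar(a, b, threshold=0.85):
    """Fuzzy string comparison; the threshold is an arbitrary illustrative choice."""
    return SequenceMatcher(None, normalize(a), normalize(b)).ratio() >= threshold

for airman in airmen:
    for entry in watchlist:
        if similar(airman["name"], entry["name"]) or similar(airman["address"], entry["address"]):
            print(f"possible match: {airman['name']} ({airman['cert']}) -> {entry['name']}")
```

Tighten the similarity threshold and aliases slip through; loosen it and innocent airmen get flagged. That tradeoff is exactly the false-positive question the article glosses over.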

In any case, they found these three people with pilot’s licenses:

Elmegrahi, who had been posted on the FBI Most Wanted list for a decade and was convicted of blowing up Pan Am Flight 103, killing 259 people in 1988 over Lockerbie, Scotland. Elmegrahi was an FAA-certified aircraft dispatcher.

Re Tabib, a California resident who was convicted in 2007 for illegally exporting U.S. military aircraft parts—specifically export maintenance kits for F-14 fighter jets—to Iran. Tabib received three FAA licenses after his conviction, qualifying to be a flight instructor, ground instructor and transport pilot.

Myron Tereshchuk, who pleaded guilty to possession of a biological weapon after the FBI caught him with a brew of ricin, explosive powder and other essentials in Maryland in 2004. Tereshchuk was a licensed mechanic and student pilot.

And the article concludes with:

Suffice to say, after the FAA was made aware of these criminal histories, all three men have since been decertified.

Although I’m all for annoying international arms dealers, does anyone know the procedures for FAA decertification? Did the FAA have the legal right to do this, after being “made aware” of some information by a third party?

Of course, they don’t talk about all the false positives their system also found. How many innocents were also decertified? And they don’t mention the fact that, in the 9/11 attacks, FAA certification wasn’t really an issue. “Excuse me, young man. You can’t hijack and fly this aircraft. It says right here that the FAA decertified you.”

Posted on November 23, 2009 at 2:36 PM

Fear and Overreaction

It’s hard work being prey. Watch the birds at a feeder. They’re constantly on alert, and will fly away from food—from easy nutrition—at the slightest movement or sound. Given that I’ve never, ever seen a bird plucked from a feeder by a predator, it seems like a whole lot of wasted effort against not very big a threat.

Assessing and reacting to risk is one of the most important things a living creature has to deal with. The amygdala, an ancient part of the brain that first evolved in primitive fishes, has that job. It’s what’s responsible for the fight-or-flight reflex. Adrenaline in the bloodstream, increased heart rate, increased muscle tension, sweaty palms; that’s the amygdala in action. And it works fast, faster than consciousness: show someone a snake and their amygdala will react before their conscious brain registers that they’re looking at a snake.

Fear motivates all sorts of animal behaviors. Schooling, flocking, and herding are all security measures. Not only is it less likely that any member of the group will be eaten, but each member of the group has to spend less time watching out for predators. Animals as diverse as bumblebees and monkeys both avoid food in areas where predators are common. Different prey species have developed various alarm calls, some surprisingly specific. And some prey species have even evolved to react to the alarms given off by other species.

Evolutionary biologist Randolph Nesse has studied animal defenses, particularly those that seem to be overreactions. These defenses are mostly all-or-nothing; a creature can’t do them halfway. Birds flying off, sea cucumbers expelling their stomachs, and vomiting are all examples. Using signal detection theory, Nesse showed that all-or-nothing defenses are expected to have many false alarms. “The smoke detector principle shows that the overresponsiveness of many defenses is an illusion. The defenses appear overresponsive because they are ‘inexpensive’ compared to the harms they protect against and because errors of too little defense are often more costly than errors of too much defense.”

So according to the theory, if flight costs 100 calories, both in flying and lost eating time, and there’s a 1 in 100 chance of being eaten if you don’t fly away, it’s smarter for survival to use up 10,000 calories repeatedly flying at the slightest movement even though there’s a 99 percent false alarm rate. Whatever the numbers happen to be for a particular species, it has evolved to get the trade-off right.
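The trade-off in that example is just an expected-value comparison. A minimal sketch of the arithmetic, using the numbers above and an assumed stand-in cost for being eaten:

```python
# The smoke detector principle as an expected-cost comparison, using the numbers
# above. The calorie-equivalent cost of being eaten is an assumed placeholder;
# the argument only needs it to dwarf the cost of fleeing.

flight_cost = 100                  # calories burned (plus lost eating time) per flight
p_predator = 0.01                  # chance the movement really is a predator
cost_of_being_eaten = 1_000_000    # assumed: death, expressed in the same units

expected_cost_of_staying = p_predator * cost_of_being_eaten   # 10,000
expected_cost_of_fleeing = flight_cost                        # 100

print("stay:", expected_cost_of_staying, " flee:", expected_cost_of_fleeing)
# Fleeing is the cheaper bet even though 99 out of 100 alarms are false,
# which is why selection favors the hair trigger.
```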

This makes sense, until the conditions that the species evolved under change quicker than evolution can react to. Even though there are far fewer predators in the city, birds at my feeder react as if they were in the primal forest. Even birds safe in a zoo’s aviary don’t realize that the situation has changed.

Humans are both no different and very different. We, too, feel fear and react with our amygdala, but we also have a conscious brain that can override those reactions. And we too live in a world very different from the one we evolved in. Our reflexive defenses might be optimized for the risks endemic to living in small family groups in the East African highlands in 100,000 BC, not 2009 New York City. But we can go beyond fear, and actually think sensibly about security.

Far too often, we don’t. We tend to be poor judges of risk. We overreact to rare risks, we ignore long-term risks, we magnify risks that are also morally offensive. We get risks wrong—threats, probabilities, and costs—all the time. When we’re afraid, really afraid, we’ll do almost anything to make that fear go away. Both politicians and marketers have learned to push that fear button to get us to do what they want.

One night last month, I was awakened from my hotel-room sleep by a loud, piercing alarm. There was no way I could ignore it, but I weighed the risks and did what any reasonable person would do under the circumstances: I stayed in bed and waited for the alarm to be turned off. No point getting dressed, walking down ten flights of stairs, and going outside into the cold for what invariably would be a false alarm—serious hotel fires are very rare. Unlike the bird in an aviary, I knew better.

You can disagree with my risk calculus, and I’m sure many hotel guests walked downstairs and outside to the designated assembly point. But it’s important to recognize that the ability to have this sort of discussion is uniquely human. And we need to have the discussion repeatedly, whether the topic is the installation of a home burglar alarm, the latest TSA security measures, or the potential military invasion of another country. These things aren’t part of our evolutionary history; we have no natural sense of how to respond to them. Our fears are often calibrated wrong, and reason is the only way we can override them.

This essay first appeared on DarkReading.com.

Posted on November 4, 2009 at 7:12 AM

Detecting Terrorists by Smelling Fear

Really:

The technology relies on recognising a pheromone – or scent signal – produced in sweat when a person is scared.

Researchers hope the “fear detector” will make it possible to identify individuals at check points who are up to no good.

Terrorists with murder in mind, drug smugglers, or criminals on the run are likely to be very fearful of being discovered.

Seems like yet another technology that will be swamped with false positives.

And is there any justification for the hypothesis that terrorists will be more afraid than anyone else? And do we know why people tend to feel fear? Is it because they’re up to no good, or for more benign reasons—like they’re simply scared of something? This link from emotion to intent is very tenuous.
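The “swamped with false positives” claim is a base-rate argument, and it is easy to make concrete. A rough sketch with invented numbers (none of them come from the article):

```python
# Back-of-the-envelope base-rate calculation for a checkpoint "fear detector".
# Every number here is invented for illustration.

travelers_per_day = 1_000_000
bad_actors = 1                   # genuinely dangerous people in that crowd
detection_rate = 0.90            # chance the detector flags a fearful bad actor
false_positive_rate = 0.01       # chance it flags an ordinary nervous traveler

true_alarms = bad_actors * detection_rate
false_alarms = (travelers_per_day - bad_actors) * false_positive_rate

print(f"alarms per day: about {true_alarms + false_alarms:,.0f}")
print(f"chance an alarm is a real threat: {true_alarms / (true_alarms + false_alarms):.4%}")
# Roughly ten thousand frightened-but-harmless travelers for every real hit.
```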

Posted on November 3, 2009 at 6:12 AM

Ballmer Blames the Failure of Windows Vista on Security

According to the Telegraph:

Mr Ballmer said: “We got some uneven reception when [Vista] first launched in large part because we made some design decisions to improve security at the expense of compatibility. I don’t think from a word-of-mouth perspective we ever recovered from that.”

Commentary:

Vista’s failure, and Ballmer’s faulting security for it, is a bit of a case of being careful what you wish for. Vista (codename “Longhorn” during its development) was always intended to be a more secure operating system. Following the security disasters of 2000 and 2001 that befell Windows 98 and 2000, Microsoft shut down all software development and launched the Trustworthy Computing Initiative, which advocated secure coding practices. Microsoft retrained thousands of programmers to eliminate common security problems such as buffer overflows. The immediate result was a retooling of Windows XP to make it more secure for its 2002 launch. The long-term goal, though, was to make Vista the most secure operating system in Microsoft’s history.

What made XP and Vista more secure? Eliminating (or reducing) buffer overflow errors helped. But what really made a difference was shutting off services by default. Many of the vulnerabilities exploited in Windows 98, NT and 2000 were actually a result of unused services that were active by default. Microsoft’s own vulnerability tracking shows that Vista has far fewer reported vulnerabilities than any of its predecessors. Unfortunately, a Vista locked down out of the box was less palatable to users.

Security obstacles aren’t the only ills Vista suffered, though. A huge memory footprint, incompatible graphics requirements, slow responsiveness, and a general sense that it was already behind competing Mac and Linux OSes in functionality and features all made Vista land with a thud. In my humble opinion, the security gains in Vista were worth many of the tradeoffs; it was the other technical requirements and incompatible applications that doomed this operating system.

There was also the problem of Vista’s endless security warnings. They were almost always false alarms, and there were no adverse effects from ignoring them. So users did ignore them, which means the warnings ended up being nothing but an annoyance.

Security warnings are often a way for the developer to avoid making a decision. “We don’t know what to do here, so we’ll put up a warning and ask the user.” But unless the users have the information and the expertise to make the decision, they’re not going to be able to. We need user interfaces that only put up warnings when it matters.

I never upgraded to Vista. I’m hoping Windows 7 is worth upgrading to. We’ll see.

EDITED TO ADD (10/22): Another opinion.

Posted on October 21, 2009 at 7:46 AM

Pepper Spray–Equipped ATMs

South Africa takes its security seriously. Here’s an ATM that automatically squirts pepper spray into the face of “people tampering with the card slots.”

Sounds cool, but these kinds of things are all about false positives:

But the mechanism backfired in one incident last week when pepper spray was inadvertently inhaled by three technicians who required treatment from paramedics.

Patrick Wadula, spokesman for the Absa bank, which is piloting the scheme, told the Mail & Guardian Online: “During a routine maintenance check at an Absa ATM in Fish Hoek, the pepper spray device was accidentally activated.

“At the time there were no customers using the ATM. However, the spray spread into the shopping centre where the ATMs are situated.”

Posted on July 17, 2009 at 1:04 PM

Second SHB Workshop Liveblogging (4)

Session three was titled “Usability.” (For the record, the Stata Center is one ugly building.)

Andrew Patrick, NRC Canada until he was laid off four days ago (suggested reading: Fingerprint Concerns: Performance, Usability, and Acceptance of Fingerprint Biometric Systems), talked about biometric systems and human behavior. Biometrics are used everywhere: for gym membership, at Disneyworld, at international borders. The government of Canada is evaluating using iris recognition at a distance for events like the 2010 Olympics. There are two different usability issues: with respect to the end user, and with respect to the authenticator. People’s acceptance of biometrics is very much dependent on the context. And of course, biometrics are not secret. Patrick suggested that to defend ourselves against this proliferation of using biometrics for authentication, the individual should publish them. The rationale is that we’re publishing them anyway, so we might as well do it knowingly.

Luke Church, Cambridge University (suggested reading: SHB Position Paper; Usability and the Common Criteria), talked about what he called “user-centered design.” There’s an economy of usability: “in order to make some things easier, we have to make some things harder”—so it makes sense to make the commonly done things easier at the expense of the rarely done things. This has a lot of parallels with security. The result is “appliancisation” (with a prize for anyone who comes up with a better name): the culmination of security behaviors and what the system can do, embedded in a series of user choices. Basically, giving users meaningful control over their security. Luke discussed several benefits and problems with the approach.

Diana Smetters, Palo Alto Research Center (suggested reading: Breaking out of the browser to defend against phishing attacks; Building secure mashups; Ad-hoc guesting: when exceptions are the rule), started with these premises: you can teach users, but you can’t teach them very much, so you’d better carefully design systems so that you 1) minimize what they have to learn, 2) make it easier for them to learn it, and 3) maximize the benefit from what they learn. Too often, security is at odds with getting the job done. “As long as configuration errors (false alarms) are common, any technology that requires users to observe security indicators and react to them will fail as attacks can simply masquerade as errors, and users will rationally ignore them.” She recommends meeting the user halfway by building new security models that actually fit the users’ needs. (For example: phishing is a mismatch problem between what’s in the user’s head and where the URL is actually going.) SSL doesn’t work, but how should websites authenticate themselves to users? Her solution is protected links: a set of secure bookmarks in protected browsers. She went on to describe a prototype and tests run with user subjects.

Jon Callas, PGP Corporation (suggested reading: Improving Message Security with a Self-Assembling PKI), used the metaphor of the “security cliff”: you have to keep climbing until you get to the top and that’s hard, so it’s easier to just stay at the bottom. He wants more of a “security ramp,” so people can reasonably stop somewhere in the middle. His idea is to have a few policies—e-mail encryption, rules about USB drives—and enforce them. This works well in organizations, where IT has dictatorial control over user configuration. If we can’t teach users much, we need to enforce policies on users.

Rob Reeder, Microsoft (suggested reading: Expanding Grids for Visualizing and Authoring Computer Security Policies), presented a possible solution to the secret questions problem: social authentication. The idea is to use people you know (trustees) to authenticate who you are, and have them attest to the fact that you lost your password. He went on to describe how the protocol works, as well as several potential attacks against the protocol and defenses, and experiments that tested the protocol. In the question session he talked about people designating themselves as trustees, and how that isn’t really a problem.
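Here is a minimal sketch of the trustee idea, heavily simplified and not Reeder’s actual protocol: the account holder names trustees in advance, and recovery succeeds only if enough of them vouch with fresh one-time codes.

```python
# Simplified k-of-n trustee recovery. A sketch of the general idea only,
# not the protocol presented in the talk.
import secrets

class SocialRecovery:
    def __init__(self, trustees, threshold):
        self.trustees = set(trustees)   # people who can vouch for the account holder
        self.threshold = threshold      # how many of them must vouch
        self.pending = {}               # trustee -> one-time code, issued out of band

    def start_recovery(self):
        """Issue each trustee a one-time code (delivered out of band, e.g. by phone)."""
        self.pending = {t: secrets.token_hex(4) for t in self.trustees}
        return self.pending

    def recover(self, presented_codes):
        """The claimant presents codes collected from trustees; succeed if enough match."""
        vouches = sum(1 for trustee, code in presented_codes.items()
                      if self.pending.get(trustee) == code)
        return vouches >= self.threshold

# Example: three of four trustees vouch, so recovery succeeds.
account = SocialRecovery(["alice", "bob", "carol", "dave"], threshold=3)
codes = account.start_recovery()
print(account.recover({t: codes[t] for t in ["alice", "bob", "carol"]}))   # True
```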

Lorrie Cranor, Carnegie Mellon University (suggested reading: A Framework for Reasoning about the Human in the Loop; Timing Is Everything? The Effects of Timing and Placement of Online Privacy Indicators; School of Phish: A Real-World Evaluation of Anti-Phishing Training; You’ve Been Warned: An Empirical Study of the Effectiveness of Web Browser Phishing Warnings), talked about security warnings. The best option is to fix the hazard; the second best is to guard against it—but far too often we just warn people about it. But since hazards are generally not very hazardous, most people just ignore them. “Often, software asks the user and provides little or no information to help user make this decision.” Better is to use some sort of automated analysis to assist the user in responding to warnings. For websites, for example, the system should block sites with a high probability of danger, not bother users if there is a low probability of danger, and help the user make the decision in the grey area. She went on to describe a prototype and user studies done with the prototype; her paper will be presented at USENIX Security in August.
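That block/allow/ask policy fits in a few lines. A hypothetical sketch, assuming some upstream classifier already produces a danger score between 0 and 1 (the thresholds here are invented):

```python
# Hypothetical three-way warning policy of the kind described above: block the
# clearly dangerous, stay quiet about the clearly safe, and only ask the user,
# with some supporting context, in the grey area in between.

BLOCK_ABOVE = 0.9    # invented threshold
IGNORE_BELOW = 0.1   # invented threshold

def handle_site(url, danger_score):
    if danger_score >= BLOCK_ABOVE:
        return ("block", f"{url} blocked: high probability of danger")
    if danger_score <= IGNORE_BELOW:
        return ("allow", None)   # don't bother the user at all
    return ("warn", f"{url} looks suspicious (score {danger_score:.2f}); "
                    "proceed only if you typed this address yourself")

print(handle_site("login-examp1e-bank.test", 0.95))
print(handle_site("example.com", 0.02))
print(handle_site("examp1e.com", 0.50))
```

The hard part, of course, is the classifier behind the score and where to put the thresholds; that is what prototypes and user studies are for.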

Much of the discussion centered on how bad the problem really is, and how much security is good enough. The group also talked about economic incentives companies have to either fix or ignore security problems, and whether market approaches (or, as Jean Camp called it, “the happy Libertarian market pony”) are sufficient. Some companies have incentives to convince users to do the wrong thing, or at the very least to do nothing. For example, social networking sites are more valuable if people share their information widely.

Further discussion was about whitelisting, and whether it worked or not. There’s the problem of the bad guys getting on the whitelist, and the risk that organizations like the RIAA will use the whitelist to enforce copyright, or that large banks will use the whitelist as a tool to block smaller start-up banks. Another problem is that the user might not understand what a whitelist signifies.

Dave Clark from the audience: “It’s not hard to put a seat belt on, and if you need a lesson, take a plane.”

Kind of a one-note session. We definitely need to invite more psych people.

Adam Shostack’s liveblogging is here. Ross Anderson’s liveblogging is in his blog post’s comments. Matt Blaze’s audio is here.

Posted on June 11, 2009 at 2:56 PM

DNA False Positives

A story about a very expensive series of false positives. The German police spent years and millions of dollars tracking a mysterious killer whose DNA had been found at the scenes of six murders. Finally they realized they were tracking a worker at the factory that assembled the prepackaged swabs used for DNA testing.

This story could be used as justification for a massive DNA database. After all, if that factory worker had his or her DNA in the database, the police would have quickly realized what the problem was.

Posted on April 2, 2009 at 2:54 PM

IT Security: Blaming the Victim

Blaming the victim is common in IT: users are to blame because they don’t patch their systems, choose lousy passwords, fall for phishing attacks, and so on. But, while users are, and will continue to be, a major source of security problems, focusing on them is an unhelpful way to think.

People regularly don’t do things they are supposed to: changing the oil in their cars, going to the dentist, replacing the batteries in their smoke detectors. Why? Because people learn from experience. If something is immediately harmful, e.g., touching a hot stove or petting a live tiger, they quickly learn not to do it. But if someone skips an oil change, ignores a computer patch, or chooses a lousy password, it’s unlikely to matter. No feedback, no learning.

We’ve tried to solve this in several ways. We give people rules of thumb: oil change every 5,000 miles; secure password guidelines. Or we send notifications: smoke alarms beep at us, dentists send postcards, Google warns us if we are about to visit a website suspected of hosting malware. But, again, the effects of ignoring these aren’t generally felt immediately.

This makes security primarily a hindrance to the user. It’s a recurring obstacle: something that interferes with the seamless performance of the user’s task. And it’s human nature, wired into our reasoning skills, to remove recurring obstacles. So, if the consequences of bypassing security aren’t obvious, then people will naturally do it.

This is the problem with Microsoft’s User Account Control (UAC). Introduced in Vista, the idea is to improve security by limiting the privileges applications have when they’re running. But the security prompts pop up too frequently, and there’s rarely any ill-effect from ignoring them. So people do ignore them.

This doesn’t mean user education is worthless. On the contrary, user education is an important part of any corporate security program. And at home, the more users understand security threats and hacker tactics, the more secure their systems are likely to be. But we should also recognise the limitations of education.

The solution is to better design security systems that assume uneducated users: to prevent them from changing security settings that would leave them exposed to undue risk, or—even better—to take security out of their hands entirely.

For example, we all know that backups are a good thing. But if you forget to do a backup this week, nothing terrible happens. In fact, nothing terrible happens for years on end when you forget. So, despite what you know, you start believing that backups aren’t really that important. Apple got the solution right with its backup utility Time Machine. Install it, plug in an external hard drive, and you are automatically backed up against hardware failure and human error. It’s easier to use it than not.

For its part, Microsoft has made great strides in securing its operating system, providing default security settings in Windows XP and even more in Windows Vista to ensure that, when a naive user plugs a computer in, it’s not defenceless.

Unfortunately, blaming the user can be good business. Mobile phone companies save money if they can bill their customers when a calling card number is stolen and used fraudulently. British banks save money by blaming users when they are victims of chip-and-pin fraud. This is continuing, with some banks going so far as to accuse the victim of perpetrating the fraud, despite evidence of large-scale fraud by organised crime syndicates.

The legal system needs to fix the business problems, but system designers need to work on the technical problems. They must accept that security systems that require the user to do the right thing are doomed to fail. And then they must design resilient security nevertheless.

This essay originally appeared in The Guardian.

Posted on March 12, 2009 at 12:39 PM
